Test Report: Hyper-V_Windows 18485

bdd124d1e5a6e86e5bd4f9e512befe1eefe531bd:2024-03-28:33775

Test fail (14/193)

TestAddons/Setup (225.97s)

=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-windows-amd64.exe start -p addons-120100 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=hyperv --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:109: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p addons-120100 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=hyperv --addons=ingress --addons=ingress-dns --addons=helm-tiller: exit status 90 (3m45.8004461s)

-- stdout --
	* [addons-120100] minikube v1.33.0-beta.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4170 Build 19045.4170
	  - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=18485
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the hyperv driver based on user configuration
	* Starting "addons-120100" primary control-plane node in "addons-120100" cluster
	* Creating hyperv VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	
	

-- /stdout --
** stderr ** 
	W0327 23:29:04.375035    7424 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0327 23:29:04.454128    7424 out.go:291] Setting OutFile to fd 884 ...
	I0327 23:29:04.454862    7424 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 23:29:04.454862    7424 out.go:304] Setting ErrFile to fd 888...
	I0327 23:29:04.454862    7424 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 23:29:04.479575    7424 out.go:298] Setting JSON to false
	I0327 23:29:04.482242    7424 start.go:129] hostinfo: {"hostname":"minikube6","uptime":4805,"bootTime":1711577338,"procs":190,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4170 Build 19045.4170","kernelVersion":"10.0.19045.4170 Build 19045.4170","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0327 23:29:04.482827    7424 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0327 23:29:04.488255    7424 out.go:177] * [addons-120100] minikube v1.33.0-beta.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4170 Build 19045.4170
	I0327 23:29:04.500138    7424 notify.go:220] Checking for updates...
	I0327 23:29:04.501849    7424 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0327 23:29:04.505816    7424 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0327 23:29:04.508653    7424 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0327 23:29:04.510311    7424 out.go:177]   - MINIKUBE_LOCATION=18485
	I0327 23:29:04.513479    7424 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0327 23:29:04.516281    7424 driver.go:392] Setting default libvirt URI to qemu:///system
	I0327 23:29:10.364359    7424 out.go:177] * Using the hyperv driver based on user configuration
	I0327 23:29:10.368617    7424 start.go:297] selected driver: hyperv
	I0327 23:29:10.368617    7424 start.go:901] validating driver "hyperv" against <nil>
	I0327 23:29:10.368617    7424 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0327 23:29:10.419635    7424 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0327 23:29:10.420376    7424 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0327 23:29:10.420376    7424 cni.go:84] Creating CNI manager for ""
	I0327 23:29:10.420376    7424 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0327 23:29:10.420376    7424 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0327 23:29:10.421305    7424 start.go:340] cluster config:
	{Name:addons-120100 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:addons-120100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:
SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0327 23:29:10.421305    7424 iso.go:125] acquiring lock: {Name:mk879943e10653d47fd8ae811a43a2f6cff06f02 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0327 23:29:10.427025    7424 out.go:177] * Starting "addons-120100" primary control-plane node in "addons-120100" cluster
	I0327 23:29:10.430687    7424 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0327 23:29:10.430791    7424 preload.go:147] Found local preload: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4
	I0327 23:29:10.430791    7424 cache.go:56] Caching tarball of preloaded images
	I0327 23:29:10.430791    7424 preload.go:173] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0327 23:29:10.431331    7424 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0327 23:29:10.432294    7424 profile.go:142] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-120100\config.json ...
	I0327 23:29:10.432730    7424 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-120100\config.json: {Name:mk98051522647aea421d8d7665e4bfc3be9ec339 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 23:29:10.433463    7424 start.go:360] acquireMachinesLock for addons-120100: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0327 23:29:10.434136    7424 start.go:364] duration metric: took 72µs to acquireMachinesLock for "addons-120100"
	I0327 23:29:10.434136    7424 start.go:93] Provisioning new machine with config: &{Name:addons-120100 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.29.3 ClusterName:addons-120100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptio
ns:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0327 23:29:10.434136    7424 start.go:125] createHost starting for "" (driver="hyperv")
	I0327 23:29:10.436929    7424 out.go:204] * Creating hyperv VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0327 23:29:10.436929    7424 start.go:159] libmachine.API.Create for "addons-120100" (driver="hyperv")
	I0327 23:29:10.437457    7424 client.go:168] LocalClient.Create starting
	I0327 23:29:10.438154    7424 main.go:141] libmachine: Creating CA: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem
	I0327 23:29:10.742174    7424 main.go:141] libmachine: Creating client certificate: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem
	I0327 23:29:11.168221    7424 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0327 23:29:13.489208    7424 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0327 23:29:13.489208    7424 main.go:141] libmachine: [stderr =====>] : 
	I0327 23:29:13.489208    7424 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0327 23:29:15.392361    7424 main.go:141] libmachine: [stdout =====>] : False
	
	I0327 23:29:15.392691    7424 main.go:141] libmachine: [stderr =====>] : 
	I0327 23:29:15.392691    7424 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0327 23:29:16.983824    7424 main.go:141] libmachine: [stdout =====>] : True
	
	I0327 23:29:16.984070    7424 main.go:141] libmachine: [stderr =====>] : 
	I0327 23:29:16.984274    7424 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0327 23:29:21.010948    7424 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0327 23:29:21.011554    7424 main.go:141] libmachine: [stderr =====>] : 
	I0327 23:29:21.013679    7424 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube6/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.0-1711559712-18485-amd64.iso...
	I0327 23:29:21.515552    7424 main.go:141] libmachine: Creating SSH key...
	I0327 23:29:21.709790    7424 main.go:141] libmachine: Creating VM...
	I0327 23:29:21.709790    7424 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0327 23:29:24.733727    7424 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0327 23:29:24.734394    7424 main.go:141] libmachine: [stderr =====>] : 
	I0327 23:29:24.734641    7424 main.go:141] libmachine: Using switch "Default Switch"
	I0327 23:29:24.734782    7424 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0327 23:29:26.625434    7424 main.go:141] libmachine: [stdout =====>] : True
	
	I0327 23:29:26.626309    7424 main.go:141] libmachine: [stderr =====>] : 
	I0327 23:29:26.626367    7424 main.go:141] libmachine: Creating VHD
	I0327 23:29:26.626367    7424 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-120100\fixed.vhd' -SizeBytes 10MB -Fixed
	I0327 23:29:30.538160    7424 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube6
	Path                    : C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-120100\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 533D11DF-399B-4ED2-B239-6FEC94AFA130
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0327 23:29:30.538160    7424 main.go:141] libmachine: [stderr =====>] : 
	I0327 23:29:30.539240    7424 main.go:141] libmachine: Writing magic tar header
	I0327 23:29:30.539321    7424 main.go:141] libmachine: Writing SSH key tar header
	I0327 23:29:30.547941    7424 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-120100\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-120100\disk.vhd' -VHDType Dynamic -DeleteSource
	I0327 23:29:33.915469    7424 main.go:141] libmachine: [stdout =====>] : 
	I0327 23:29:33.915721    7424 main.go:141] libmachine: [stderr =====>] : 
	I0327 23:29:33.915810    7424 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-120100\disk.vhd' -SizeBytes 20000MB
	I0327 23:29:36.593495    7424 main.go:141] libmachine: [stdout =====>] : 
	I0327 23:29:36.594240    7424 main.go:141] libmachine: [stderr =====>] : 
	I0327 23:29:36.594334    7424 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM addons-120100 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-120100' -SwitchName 'Default Switch' -MemoryStartupBytes 4000MB
	I0327 23:29:41.280813    7424 main.go:141] libmachine: [stdout =====>] : 
	Name          State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----          ----- ----------- ----------------- ------   ------             -------
	addons-120100 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0327 23:29:41.281270    7424 main.go:141] libmachine: [stderr =====>] : 
	I0327 23:29:41.281351    7424 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName addons-120100 -DynamicMemoryEnabled $false
	I0327 23:29:43.690487    7424 main.go:141] libmachine: [stdout =====>] : 
	I0327 23:29:43.691397    7424 main.go:141] libmachine: [stderr =====>] : 
	I0327 23:29:43.691397    7424 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor addons-120100 -Count 2
	I0327 23:29:46.041087    7424 main.go:141] libmachine: [stdout =====>] : 
	I0327 23:29:46.041087    7424 main.go:141] libmachine: [stderr =====>] : 
	I0327 23:29:46.041240    7424 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName addons-120100 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-120100\boot2docker.iso'
	I0327 23:29:48.882767    7424 main.go:141] libmachine: [stdout =====>] : 
	I0327 23:29:48.882767    7424 main.go:141] libmachine: [stderr =====>] : 
	I0327 23:29:48.883361    7424 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName addons-120100 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-120100\disk.vhd'
	I0327 23:29:51.752166    7424 main.go:141] libmachine: [stdout =====>] : 
	I0327 23:29:51.752166    7424 main.go:141] libmachine: [stderr =====>] : 
	I0327 23:29:51.752166    7424 main.go:141] libmachine: Starting VM...
	I0327 23:29:51.752426    7424 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM addons-120100
	I0327 23:29:55.140178    7424 main.go:141] libmachine: [stdout =====>] : 
	I0327 23:29:55.140331    7424 main.go:141] libmachine: [stderr =====>] : 
	I0327 23:29:55.140331    7424 main.go:141] libmachine: Waiting for host to start...
	I0327 23:29:55.140331    7424 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-120100 ).state
	I0327 23:29:57.524379    7424 main.go:141] libmachine: [stdout =====>] : Running
	
	I0327 23:29:57.524379    7424 main.go:141] libmachine: [stderr =====>] : 
	I0327 23:29:57.524379    7424 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-120100 ).networkadapters[0]).ipaddresses[0]
	I0327 23:30:00.216040    7424 main.go:141] libmachine: [stdout =====>] : 
	I0327 23:30:00.216179    7424 main.go:141] libmachine: [stderr =====>] : 
	I0327 23:30:01.225975    7424 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-120100 ).state
	I0327 23:30:03.594521    7424 main.go:141] libmachine: [stdout =====>] : Running
	
	I0327 23:30:03.595450    7424 main.go:141] libmachine: [stderr =====>] : 
	I0327 23:30:03.595450    7424 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-120100 ).networkadapters[0]).ipaddresses[0]
	I0327 23:30:06.318231    7424 main.go:141] libmachine: [stdout =====>] : 
	I0327 23:30:06.318345    7424 main.go:141] libmachine: [stderr =====>] : 
	I0327 23:30:07.327739    7424 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-120100 ).state
	I0327 23:30:09.704885    7424 main.go:141] libmachine: [stdout =====>] : Running
	
	I0327 23:30:09.704885    7424 main.go:141] libmachine: [stderr =====>] : 
	I0327 23:30:09.704885    7424 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-120100 ).networkadapters[0]).ipaddresses[0]
	I0327 23:30:12.432827    7424 main.go:141] libmachine: [stdout =====>] : 
	I0327 23:30:12.432827    7424 main.go:141] libmachine: [stderr =====>] : 
	I0327 23:30:13.442130    7424 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-120100 ).state
	I0327 23:30:15.866927    7424 main.go:141] libmachine: [stdout =====>] : Running
	
	I0327 23:30:15.867201    7424 main.go:141] libmachine: [stderr =====>] : 
	I0327 23:30:15.867266    7424 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-120100 ).networkadapters[0]).ipaddresses[0]
	I0327 23:30:18.545990    7424 main.go:141] libmachine: [stdout =====>] : 
	I0327 23:30:18.545990    7424 main.go:141] libmachine: [stderr =====>] : 
	I0327 23:30:19.549694    7424 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-120100 ).state
	I0327 23:30:21.936837    7424 main.go:141] libmachine: [stdout =====>] : Running
	
	I0327 23:30:21.936837    7424 main.go:141] libmachine: [stderr =====>] : 
	I0327 23:30:21.936837    7424 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-120100 ).networkadapters[0]).ipaddresses[0]
	I0327 23:30:24.692613    7424 main.go:141] libmachine: [stdout =====>] : 172.28.232.103
	
	I0327 23:30:24.692613    7424 main.go:141] libmachine: [stderr =====>] : 
	I0327 23:30:24.692811    7424 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-120100 ).state
	I0327 23:30:26.984195    7424 main.go:141] libmachine: [stdout =====>] : Running
	
	I0327 23:30:26.984195    7424 main.go:141] libmachine: [stderr =====>] : 
	I0327 23:30:26.984195    7424 machine.go:94] provisionDockerMachine start ...
	I0327 23:30:26.984195    7424 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-120100 ).state
	I0327 23:30:29.297710    7424 main.go:141] libmachine: [stdout =====>] : Running
	
	I0327 23:30:29.298051    7424 main.go:141] libmachine: [stderr =====>] : 
	I0327 23:30:29.298051    7424 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-120100 ).networkadapters[0]).ipaddresses[0]
	I0327 23:30:32.070538    7424 main.go:141] libmachine: [stdout =====>] : 172.28.232.103
	
	I0327 23:30:32.070538    7424 main.go:141] libmachine: [stderr =====>] : 
	I0327 23:30:32.077051    7424 main.go:141] libmachine: Using SSH client type: native
	I0327 23:30:32.093903    7424 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x12a9f80] 0x12acb60 <nil>  [] 0s} 172.28.232.103 22 <nil> <nil>}
	I0327 23:30:32.093903    7424 main.go:141] libmachine: About to run SSH command:
	hostname
	I0327 23:30:32.225963    7424 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0327 23:30:32.225963    7424 buildroot.go:166] provisioning hostname "addons-120100"
	I0327 23:30:32.225963    7424 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-120100 ).state
	I0327 23:30:34.524639    7424 main.go:141] libmachine: [stdout =====>] : Running
	
	I0327 23:30:34.524639    7424 main.go:141] libmachine: [stderr =====>] : 
	I0327 23:30:34.524639    7424 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-120100 ).networkadapters[0]).ipaddresses[0]
	I0327 23:30:37.251736    7424 main.go:141] libmachine: [stdout =====>] : 172.28.232.103
	
	I0327 23:30:37.252264    7424 main.go:141] libmachine: [stderr =====>] : 
	I0327 23:30:37.258011    7424 main.go:141] libmachine: Using SSH client type: native
	I0327 23:30:37.259140    7424 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x12a9f80] 0x12acb60 <nil>  [] 0s} 172.28.232.103 22 <nil> <nil>}
	I0327 23:30:37.259140    7424 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-120100 && echo "addons-120100" | sudo tee /etc/hostname
	I0327 23:30:37.419179    7424 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-120100
	
	I0327 23:30:37.419179    7424 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-120100 ).state
	I0327 23:30:39.658973    7424 main.go:141] libmachine: [stdout =====>] : Running
	
	I0327 23:30:39.658973    7424 main.go:141] libmachine: [stderr =====>] : 
	I0327 23:30:39.659697    7424 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-120100 ).networkadapters[0]).ipaddresses[0]
	I0327 23:30:42.345677    7424 main.go:141] libmachine: [stdout =====>] : 172.28.232.103
	
	I0327 23:30:42.345677    7424 main.go:141] libmachine: [stderr =====>] : 
	I0327 23:30:42.351954    7424 main.go:141] libmachine: Using SSH client type: native
	I0327 23:30:42.352685    7424 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x12a9f80] 0x12acb60 <nil>  [] 0s} 172.28.232.103 22 <nil> <nil>}
	I0327 23:30:42.352685    7424 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-120100' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-120100/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-120100' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0327 23:30:42.504614    7424 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0327 23:30:42.504614    7424 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0327 23:30:42.504614    7424 buildroot.go:174] setting up certificates
	I0327 23:30:42.504614    7424 provision.go:84] configureAuth start
	I0327 23:30:42.504614    7424 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-120100 ).state
	I0327 23:30:44.813794    7424 main.go:141] libmachine: [stdout =====>] : Running
	
	I0327 23:30:44.813794    7424 main.go:141] libmachine: [stderr =====>] : 
	I0327 23:30:44.814500    7424 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-120100 ).networkadapters[0]).ipaddresses[0]
	I0327 23:30:47.539666    7424 main.go:141] libmachine: [stdout =====>] : 172.28.232.103
	
	I0327 23:30:47.539666    7424 main.go:141] libmachine: [stderr =====>] : 
	I0327 23:30:47.539666    7424 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-120100 ).state
	I0327 23:30:49.867775    7424 main.go:141] libmachine: [stdout =====>] : Running
	
	I0327 23:30:49.867775    7424 main.go:141] libmachine: [stderr =====>] : 
	I0327 23:30:49.868302    7424 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-120100 ).networkadapters[0]).ipaddresses[0]
	I0327 23:30:52.642346    7424 main.go:141] libmachine: [stdout =====>] : 172.28.232.103
	
	I0327 23:30:52.642346    7424 main.go:141] libmachine: [stderr =====>] : 
	I0327 23:30:52.642346    7424 provision.go:143] copyHostCerts
	I0327 23:30:52.643842    7424 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0327 23:30:52.645652    7424 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0327 23:30:52.647131    7424 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1675 bytes)
	I0327 23:30:52.648232    7424 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.addons-120100 san=[127.0.0.1 172.28.232.103 addons-120100 localhost minikube]
	I0327 23:30:52.838043    7424 provision.go:177] copyRemoteCerts
	I0327 23:30:52.853559    7424 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0327 23:30:52.853559    7424 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-120100 ).state
	I0327 23:30:55.138204    7424 main.go:141] libmachine: [stdout =====>] : Running
	
	I0327 23:30:55.138285    7424 main.go:141] libmachine: [stderr =====>] : 
	I0327 23:30:55.138285    7424 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-120100 ).networkadapters[0]).ipaddresses[0]
	I0327 23:30:57.880430    7424 main.go:141] libmachine: [stdout =====>] : 172.28.232.103
	
	I0327 23:30:57.880430    7424 main.go:141] libmachine: [stderr =====>] : 
	I0327 23:30:57.881170    7424 sshutil.go:53] new ssh client: &{IP:172.28.232.103 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-120100\id_rsa Username:docker}
	I0327 23:30:57.983208    7424 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (5.1296222s)
	I0327 23:30:57.983627    7424 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0327 23:30:58.036430    7424 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I0327 23:30:58.087705    7424 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0327 23:30:58.135337    7424 provision.go:87] duration metric: took 15.6306417s to configureAuth
	I0327 23:30:58.135337    7424 buildroot.go:189] setting minikube options for container-runtime
	I0327 23:30:58.136096    7424 config.go:182] Loaded profile config "addons-120100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0327 23:30:58.136096    7424 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-120100 ).state
	I0327 23:31:00.434851    7424 main.go:141] libmachine: [stdout =====>] : Running
	
	I0327 23:31:00.434851    7424 main.go:141] libmachine: [stderr =====>] : 
	I0327 23:31:00.434851    7424 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-120100 ).networkadapters[0]).ipaddresses[0]
	I0327 23:31:03.198784    7424 main.go:141] libmachine: [stdout =====>] : 172.28.232.103
	
	I0327 23:31:03.198959    7424 main.go:141] libmachine: [stderr =====>] : 
	I0327 23:31:03.205488    7424 main.go:141] libmachine: Using SSH client type: native
	I0327 23:31:03.206146    7424 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x12a9f80] 0x12acb60 <nil>  [] 0s} 172.28.232.103 22 <nil> <nil>}
	I0327 23:31:03.206146    7424 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0327 23:31:03.332668    7424 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0327 23:31:03.332813    7424 buildroot.go:70] root file system type: tmpfs
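The probe above is how the provisioner concludes the root filesystem is tmpfs (i.e. non-persistent, so the docker unit must be written fresh each boot). The same pipeline can be reproduced locally; GNU coreutils `df` (for `--output=fstype`) and the mount point `/` are the only assumptions:

```shell
# Report the filesystem type of the root mount, exactly as the SSH
# command in the log does: df prints a header line, tail -n 1 drops it.
fstype="$(df --output=fstype / | tail -n 1)"
echo "root filesystem type: ${fstype}"
```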
	I0327 23:31:03.333008    7424 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0327 23:31:03.333098    7424 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-120100 ).state
	I0327 23:31:05.590079    7424 main.go:141] libmachine: [stdout =====>] : Running
	
	I0327 23:31:05.590171    7424 main.go:141] libmachine: [stderr =====>] : 
	I0327 23:31:05.590251    7424 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-120100 ).networkadapters[0]).ipaddresses[0]
	I0327 23:31:08.317272    7424 main.go:141] libmachine: [stdout =====>] : 172.28.232.103
	
	I0327 23:31:08.317272    7424 main.go:141] libmachine: [stderr =====>] : 
	I0327 23:31:08.323465    7424 main.go:141] libmachine: Using SSH client type: native
	I0327 23:31:08.324156    7424 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x12a9f80] 0x12acb60 <nil>  [] 0s} 172.28.232.103 22 <nil> <nil>}
	I0327 23:31:08.324156    7424 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0327 23:31:08.476837    7424 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0327 23:31:08.476984    7424 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-120100 ).state
	I0327 23:31:10.732664    7424 main.go:141] libmachine: [stdout =====>] : Running
	
	I0327 23:31:10.733543    7424 main.go:141] libmachine: [stderr =====>] : 
	I0327 23:31:10.733543    7424 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-120100 ).networkadapters[0]).ipaddresses[0]
	I0327 23:31:13.441886    7424 main.go:141] libmachine: [stdout =====>] : 172.28.232.103
	
	I0327 23:31:13.441886    7424 main.go:141] libmachine: [stderr =====>] : 
	I0327 23:31:13.448400    7424 main.go:141] libmachine: Using SSH client type: native
	I0327 23:31:13.449316    7424 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x12a9f80] 0x12acb60 <nil>  [] 0s} 172.28.232.103 22 <nil> <nil>}
	I0327 23:31:13.449316    7424 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0327 23:31:15.656672    7424 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0327 23:31:15.656672    7424 machine.go:97] duration metric: took 48.6722238s to provisionDockerMachine
	I0327 23:31:15.656672    7424 client.go:171] duration metric: took 2m5.2184738s to LocalClient.Create
	I0327 23:31:15.656672    7424 start.go:167] duration metric: took 2m5.2190924s to libmachine.API.Create "addons-120100"
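The unit install above follows a write-then-swap pattern: render the candidate to `docker.service.new`, diff it against the live unit, and only when they differ (or the live unit is missing, as in the `can't stat` output here) move the new file into place and reload systemd. A minimal sketch against throwaway files — `demo.service` in a temp directory is a hypothetical stand-in, and the `systemctl` calls are elided:

```shell
# Diff-then-swap, as in the provisioning step above, on scratch files.
dir="$(mktemp -d)"
printf '[Unit]\nDescription=demo\n' > "${dir}/demo.service.new"
# diff exits non-zero when the live unit is missing or differs, which
# triggers the swap (the real flow follows the mv with
# `systemctl daemon-reload && systemctl restart docker`).
diff -u "${dir}/demo.service" "${dir}/demo.service.new" 2>/dev/null || \
  mv "${dir}/demo.service.new" "${dir}/demo.service"
cat "${dir}/demo.service"
```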
	I0327 23:31:15.656672    7424 start.go:293] postStartSetup for "addons-120100" (driver="hyperv")
	I0327 23:31:15.656672    7424 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0327 23:31:15.670312    7424 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0327 23:31:15.670312    7424 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-120100 ).state
	I0327 23:31:18.001210    7424 main.go:141] libmachine: [stdout =====>] : Running
	
	I0327 23:31:18.001210    7424 main.go:141] libmachine: [stderr =====>] : 
	I0327 23:31:18.001726    7424 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-120100 ).networkadapters[0]).ipaddresses[0]
	I0327 23:31:20.757360    7424 main.go:141] libmachine: [stdout =====>] : 172.28.232.103
	
	I0327 23:31:20.757360    7424 main.go:141] libmachine: [stderr =====>] : 
	I0327 23:31:20.758334    7424 sshutil.go:53] new ssh client: &{IP:172.28.232.103 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-120100\id_rsa Username:docker}
	I0327 23:31:20.866561    7424 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (5.1961291s)
	I0327 23:31:20.881346    7424 ssh_runner.go:195] Run: cat /etc/os-release
	I0327 23:31:20.888936    7424 info.go:137] Remote host: Buildroot 2023.02.9
	I0327 23:31:20.888936    7424 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0327 23:31:20.888936    7424 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0327 23:31:20.889522    7424 start.go:296] duration metric: took 5.2328223s for postStartSetup
	I0327 23:31:20.892414    7424 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-120100 ).state
	I0327 23:31:23.213620    7424 main.go:141] libmachine: [stdout =====>] : Running
	
	I0327 23:31:23.213696    7424 main.go:141] libmachine: [stderr =====>] : 
	I0327 23:31:23.213780    7424 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-120100 ).networkadapters[0]).ipaddresses[0]
	I0327 23:31:25.999339    7424 main.go:141] libmachine: [stdout =====>] : 172.28.232.103
	
	I0327 23:31:25.999408    7424 main.go:141] libmachine: [stderr =====>] : 
	I0327 23:31:25.999663    7424 profile.go:142] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-120100\config.json ...
	I0327 23:31:26.002563    7424 start.go:128] duration metric: took 2m15.5677218s to createHost
	I0327 23:31:26.002665    7424 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-120100 ).state
	I0327 23:31:28.313652    7424 main.go:141] libmachine: [stdout =====>] : Running
	
	I0327 23:31:28.314364    7424 main.go:141] libmachine: [stderr =====>] : 
	I0327 23:31:28.314364    7424 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-120100 ).networkadapters[0]).ipaddresses[0]
	I0327 23:31:31.110080    7424 main.go:141] libmachine: [stdout =====>] : 172.28.232.103
	
	I0327 23:31:31.110146    7424 main.go:141] libmachine: [stderr =====>] : 
	I0327 23:31:31.117096    7424 main.go:141] libmachine: Using SSH client type: native
	I0327 23:31:31.117238    7424 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x12a9f80] 0x12acb60 <nil>  [] 0s} 172.28.232.103 22 <nil> <nil>}
	I0327 23:31:31.117238    7424 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0327 23:31:31.241254    7424 main.go:141] libmachine: SSH cmd err, output: <nil>: 1711582291.241521846
	
	I0327 23:31:31.241254    7424 fix.go:216] guest clock: 1711582291.241521846
	I0327 23:31:31.241254    7424 fix.go:229] Guest: 2024-03-27 23:31:31.241521846 +0000 UTC Remote: 2024-03-27 23:31:26.0026655 +0000 UTC m=+141.734417201 (delta=5.238856346s)
	I0327 23:31:31.241832    7424 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-120100 ).state
	I0327 23:31:33.531703    7424 main.go:141] libmachine: [stdout =====>] : Running
	
	I0327 23:31:33.531703    7424 main.go:141] libmachine: [stderr =====>] : 
	I0327 23:31:33.532252    7424 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-120100 ).networkadapters[0]).ipaddresses[0]
	I0327 23:31:36.255095    7424 main.go:141] libmachine: [stdout =====>] : 172.28.232.103
	
	I0327 23:31:36.255095    7424 main.go:141] libmachine: [stderr =====>] : 
	I0327 23:31:36.261538    7424 main.go:141] libmachine: Using SSH client type: native
	I0327 23:31:36.261538    7424 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x12a9f80] 0x12acb60 <nil>  [] 0s} 172.28.232.103 22 <nil> <nil>}
	I0327 23:31:36.261538    7424 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1711582291
	I0327 23:31:36.402968    7424 main.go:141] libmachine: SSH cmd err, output: <nil>: Wed Mar 27 23:31:31 UTC 2024
	
	I0327 23:31:36.402968    7424 fix.go:236] clock set: Wed Mar 27 23:31:31 UTC 2024
	 (err=<nil>)
	I0327 23:31:36.402968    7424 start.go:83] releasing machines lock for "addons-120100", held for 2m25.9680726s
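The delta reported above (5.238856346s) is simply guest time minus host reference time in fractional epoch seconds, after which the fix truncates to whole seconds for `sudo date -s @…`. The arithmetic can be checked with the two timestamps taken from the log (shown here to millisecond precision, since double-precision floats cannot reliably resolve finer differences at this magnitude):

```shell
# Recompute the guest/host clock delta that fix.go reports above.
guest=1711582291.241521846   # guest `date +%s.%N` output from the log
remote=1711582286.002665500  # host-side reference time from the log
delta="$(awk -v g="$guest" -v r="$remote" 'BEGIN { printf "%.3f", g - r }')"
echo "delta=${delta}s"   # matches the logged delta to the millisecond
```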
	I0327 23:31:36.402968    7424 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-120100 ).state
	I0327 23:31:38.688676    7424 main.go:141] libmachine: [stdout =====>] : Running
	
	I0327 23:31:38.689668    7424 main.go:141] libmachine: [stderr =====>] : 
	I0327 23:31:38.689733    7424 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-120100 ).networkadapters[0]).ipaddresses[0]
	I0327 23:31:41.426688    7424 main.go:141] libmachine: [stdout =====>] : 172.28.232.103
	
	I0327 23:31:41.426688    7424 main.go:141] libmachine: [stderr =====>] : 
	I0327 23:31:41.432190    7424 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0327 23:31:41.432344    7424 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-120100 ).state
	I0327 23:31:41.443440    7424 ssh_runner.go:195] Run: cat /version.json
	I0327 23:31:41.444403    7424 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-120100 ).state
	I0327 23:31:43.756543    7424 main.go:141] libmachine: [stdout =====>] : Running
	
	I0327 23:31:43.756757    7424 main.go:141] libmachine: [stderr =====>] : 
	I0327 23:31:43.756757    7424 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-120100 ).networkadapters[0]).ipaddresses[0]
	I0327 23:31:43.796282    7424 main.go:141] libmachine: [stdout =====>] : Running
	
	I0327 23:31:43.796586    7424 main.go:141] libmachine: [stderr =====>] : 
	I0327 23:31:43.796586    7424 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-120100 ).networkadapters[0]).ipaddresses[0]
	I0327 23:31:46.594702    7424 main.go:141] libmachine: [stdout =====>] : 172.28.232.103
	
	I0327 23:31:46.594702    7424 main.go:141] libmachine: [stderr =====>] : 
	I0327 23:31:46.595722    7424 sshutil.go:53] new ssh client: &{IP:172.28.232.103 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-120100\id_rsa Username:docker}
	I0327 23:31:46.626518    7424 main.go:141] libmachine: [stdout =====>] : 172.28.232.103
	
	I0327 23:31:46.626518    7424 main.go:141] libmachine: [stderr =====>] : 
	I0327 23:31:46.627055    7424 sshutil.go:53] new ssh client: &{IP:172.28.232.103 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-120100\id_rsa Username:docker}
	I0327 23:31:46.690549    7424 ssh_runner.go:235] Completed: cat /version.json: (5.2470815s)
	I0327 23:31:46.702910    7424 ssh_runner.go:195] Run: systemctl --version
	I0327 23:31:46.780996    7424 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.348779s)
	I0327 23:31:46.793657    7424 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0327 23:31:46.803524    7424 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0327 23:31:46.817563    7424 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0327 23:31:46.848886    7424 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0327 23:31:46.848968    7424 start.go:494] detecting cgroup driver to use...
	I0327 23:31:46.849472    7424 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0327 23:31:46.899721    7424 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0327 23:31:46.933250    7424 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0327 23:31:46.954995    7424 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0327 23:31:46.968502    7424 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0327 23:31:47.004435    7424 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0327 23:31:47.040711    7424 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0327 23:31:47.075765    7424 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0327 23:31:47.111703    7424 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0327 23:31:47.149702    7424 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0327 23:31:47.186607    7424 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0327 23:31:47.224155    7424 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0327 23:31:47.259012    7424 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0327 23:31:47.292223    7424 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0327 23:31:47.330025    7424 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0327 23:31:47.549398    7424 ssh_runner.go:195] Run: sudo systemctl restart containerd
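The containerd reconfiguration above is a series of in-place `sed` edits on `/etc/containerd/config.toml`. The cgroup-driver toggle can be sketched against a scratch copy — the file path and contents here are illustrative, and GNU `sed -i` is assumed:

```shell
# Flip SystemdCgroup to false in a scratch config, mirroring the sed
# command run over /etc/containerd/config.toml in the log above.
cfg="$(mktemp)"
printf '%s\n' \
  '[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]' \
  '    SystemdCgroup = true' > "${cfg}"
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "${cfg}"
grep 'SystemdCgroup' "${cfg}"
```

Note that the capture group `( *)` preserves the original indentation, which is why the logged command is safe to run over a config whose nesting depth it does not know.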
	I0327 23:31:47.587580    7424 start.go:494] detecting cgroup driver to use...
	I0327 23:31:47.603620    7424 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0327 23:31:47.643418    7424 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0327 23:31:47.680874    7424 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0327 23:31:47.731772    7424 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0327 23:31:47.775085    7424 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0327 23:31:47.818572    7424 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0327 23:31:47.891994    7424 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0327 23:31:47.920287    7424 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
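The crictl configuration is rewritten twice in this log: first pointing at containerd's socket, then — once Docker is selected as the runtime — at cri-dockerd's. The write itself is a one-liner; this sketch targets a temp directory instead of `/etc` so no privileges are needed:

```shell
# Point crictl at the cri-dockerd socket, as the step above does,
# writing into a scratch directory rather than /etc.
etc="$(mktemp -d)"
printf 'runtime-endpoint: unix:///var/run/cri-dockerd.sock\n' \
  | tee "${etc}/crictl.yaml" > /dev/null
cat "${etc}/crictl.yaml"
```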
	I0327 23:31:47.973693    7424 ssh_runner.go:195] Run: which cri-dockerd
	I0327 23:31:47.994738    7424 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0327 23:31:48.015737    7424 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0327 23:31:48.064856    7424 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0327 23:31:48.285529    7424 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0327 23:31:48.492759    7424 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0327 23:31:48.493020    7424 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0327 23:31:48.557852    7424 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0327 23:31:48.794262    7424 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0327 23:32:49.944492    7424 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.1499106s)
	I0327 23:32:49.959604    7424 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0327 23:32:49.996476    7424 out.go:177] 
	W0327 23:32:49.999280    7424 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Mar 27 23:31:14 addons-120100 systemd[1]: Starting Docker Application Container Engine...
	Mar 27 23:31:14 addons-120100 dockerd[662]: time="2024-03-27T23:31:14.088636550Z" level=info msg="Starting up"
	Mar 27 23:31:14 addons-120100 dockerd[662]: time="2024-03-27T23:31:14.090269455Z" level=info msg="containerd not running, starting managed containerd"
	Mar 27 23:31:14 addons-120100 dockerd[662]: time="2024-03-27T23:31:14.093885867Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=668
	Mar 27 23:31:14 addons-120100 dockerd[668]: time="2024-03-27T23:31:14.127814872Z" level=info msg="starting containerd" revision=dcf2847247e18caba8dce86522029642f60fe96b version=v1.7.14
	Mar 27 23:31:14 addons-120100 dockerd[668]: time="2024-03-27T23:31:14.157058162Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Mar 27 23:31:14 addons-120100 dockerd[668]: time="2024-03-27T23:31:14.157203063Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Mar 27 23:31:14 addons-120100 dockerd[668]: time="2024-03-27T23:31:14.157288663Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Mar 27 23:31:14 addons-120100 dockerd[668]: time="2024-03-27T23:31:14.157325363Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Mar 27 23:31:14 addons-120100 dockerd[668]: time="2024-03-27T23:31:14.157432863Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Mar 27 23:31:14 addons-120100 dockerd[668]: time="2024-03-27T23:31:14.157542064Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Mar 27 23:31:14 addons-120100 dockerd[668]: time="2024-03-27T23:31:14.158037465Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Mar 27 23:31:14 addons-120100 dockerd[668]: time="2024-03-27T23:31:14.158141266Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Mar 27 23:31:14 addons-120100 dockerd[668]: time="2024-03-27T23:31:14.158165966Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Mar 27 23:31:14 addons-120100 dockerd[668]: time="2024-03-27T23:31:14.158178166Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Mar 27 23:31:14 addons-120100 dockerd[668]: time="2024-03-27T23:31:14.158296866Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Mar 27 23:31:14 addons-120100 dockerd[668]: time="2024-03-27T23:31:14.158684867Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Mar 27 23:31:14 addons-120100 dockerd[668]: time="2024-03-27T23:31:14.163046681Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Mar 27 23:31:14 addons-120100 dockerd[668]: time="2024-03-27T23:31:14.163162681Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Mar 27 23:31:14 addons-120100 dockerd[668]: time="2024-03-27T23:31:14.163319482Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Mar 27 23:31:14 addons-120100 dockerd[668]: time="2024-03-27T23:31:14.163416982Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Mar 27 23:31:14 addons-120100 dockerd[668]: time="2024-03-27T23:31:14.163529982Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Mar 27 23:31:14 addons-120100 dockerd[668]: time="2024-03-27T23:31:14.163675683Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Mar 27 23:31:14 addons-120100 dockerd[668]: time="2024-03-27T23:31:14.163713783Z" level=info msg="metadata content store policy set" policy=shared
	Mar 27 23:31:14 addons-120100 dockerd[668]: time="2024-03-27T23:31:14.238637215Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Mar 27 23:31:14 addons-120100 dockerd[668]: time="2024-03-27T23:31:14.238717715Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Mar 27 23:31:14 addons-120100 dockerd[668]: time="2024-03-27T23:31:14.238784115Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Mar 27 23:31:14 addons-120100 dockerd[668]: time="2024-03-27T23:31:14.238807815Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Mar 27 23:31:14 addons-120100 dockerd[668]: time="2024-03-27T23:31:14.238827515Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Mar 27 23:31:14 addons-120100 dockerd[668]: time="2024-03-27T23:31:14.239062916Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Mar 27 23:31:14 addons-120100 dockerd[668]: time="2024-03-27T23:31:14.240789522Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Mar 27 23:31:14 addons-120100 dockerd[668]: time="2024-03-27T23:31:14.241096522Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Mar 27 23:31:14 addons-120100 dockerd[668]: time="2024-03-27T23:31:14.241191723Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Mar 27 23:31:14 addons-120100 dockerd[668]: time="2024-03-27T23:31:14.241261023Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Mar 27 23:31:14 addons-120100 dockerd[668]: time="2024-03-27T23:31:14.241393323Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Mar 27 23:31:14 addons-120100 dockerd[668]: time="2024-03-27T23:31:14.241475424Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Mar 27 23:31:14 addons-120100 dockerd[668]: time="2024-03-27T23:31:14.241541024Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Mar 27 23:31:14 addons-120100 dockerd[668]: time="2024-03-27T23:31:14.241694324Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Mar 27 23:31:14 addons-120100 dockerd[668]: time="2024-03-27T23:31:14.241817625Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Mar 27 23:31:14 addons-120100 dockerd[668]: time="2024-03-27T23:31:14.241889525Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Mar 27 23:31:14 addons-120100 dockerd[668]: time="2024-03-27T23:31:14.241947625Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Mar 27 23:31:14 addons-120100 dockerd[668]: time="2024-03-27T23:31:14.242226326Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Mar 27 23:31:14 addons-120100 dockerd[668]: time="2024-03-27T23:31:14.242751628Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Mar 27 23:31:14 addons-120100 dockerd[668]: time="2024-03-27T23:31:14.242835428Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Mar 27 23:31:14 addons-120100 dockerd[668]: time="2024-03-27T23:31:14.242894228Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Mar 27 23:31:14 addons-120100 dockerd[668]: time="2024-03-27T23:31:14.242954628Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Mar 27 23:31:14 addons-120100 dockerd[668]: time="2024-03-27T23:31:14.243007328Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Mar 27 23:31:14 addons-120100 dockerd[668]: time="2024-03-27T23:31:14.243057729Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Mar 27 23:31:14 addons-120100 dockerd[668]: time="2024-03-27T23:31:14.243106929Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Mar 27 23:31:14 addons-120100 dockerd[668]: time="2024-03-27T23:31:14.243158629Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Mar 27 23:31:14 addons-120100 dockerd[668]: time="2024-03-27T23:31:14.243209529Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Mar 27 23:31:14 addons-120100 dockerd[668]: time="2024-03-27T23:31:14.243255629Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Mar 27 23:31:14 addons-120100 dockerd[668]: time="2024-03-27T23:31:14.243273029Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Mar 27 23:31:14 addons-120100 dockerd[668]: time="2024-03-27T23:31:14.243288729Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Mar 27 23:31:14 addons-120100 dockerd[668]: time="2024-03-27T23:31:14.243305129Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Mar 27 23:31:14 addons-120100 dockerd[668]: time="2024-03-27T23:31:14.243324429Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Mar 27 23:31:14 addons-120100 dockerd[668]: time="2024-03-27T23:31:14.243351829Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Mar 27 23:31:14 addons-120100 dockerd[668]: time="2024-03-27T23:31:14.243367330Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Mar 27 23:31:14 addons-120100 dockerd[668]: time="2024-03-27T23:31:14.243380530Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Mar 27 23:31:14 addons-120100 dockerd[668]: time="2024-03-27T23:31:14.243436030Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Mar 27 23:31:14 addons-120100 dockerd[668]: time="2024-03-27T23:31:14.243456130Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Mar 27 23:31:14 addons-120100 dockerd[668]: time="2024-03-27T23:31:14.243469330Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Mar 27 23:31:14 addons-120100 dockerd[668]: time="2024-03-27T23:31:14.243481830Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Mar 27 23:31:14 addons-120100 dockerd[668]: time="2024-03-27T23:31:14.243763431Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Mar 27 23:31:14 addons-120100 dockerd[668]: time="2024-03-27T23:31:14.243799731Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Mar 27 23:31:14 addons-120100 dockerd[668]: time="2024-03-27T23:31:14.243827131Z" level=info msg="NRI interface is disabled by configuration."
	Mar 27 23:31:14 addons-120100 dockerd[668]: time="2024-03-27T23:31:14.244092532Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Mar 27 23:31:14 addons-120100 dockerd[668]: time="2024-03-27T23:31:14.244174332Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Mar 27 23:31:14 addons-120100 dockerd[668]: time="2024-03-27T23:31:14.244231632Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Mar 27 23:31:14 addons-120100 dockerd[668]: time="2024-03-27T23:31:14.244278332Z" level=info msg="containerd successfully booted in 0.118103s"
	Mar 27 23:31:15 addons-120100 dockerd[662]: time="2024-03-27T23:31:15.161332460Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Mar 27 23:31:15 addons-120100 dockerd[662]: time="2024-03-27T23:31:15.193211752Z" level=info msg="Loading containers: start."
	Mar 27 23:31:15 addons-120100 dockerd[662]: time="2024-03-27T23:31:15.493061223Z" level=info msg="Loading containers: done."
	Mar 27 23:31:15 addons-120100 dockerd[662]: time="2024-03-27T23:31:15.526921412Z" level=info msg="Docker daemon" commit=8b79278 containerd-snapshotter=false storage-driver=overlay2 version=26.0.0
	Mar 27 23:31:15 addons-120100 dockerd[662]: time="2024-03-27T23:31:15.527139113Z" level=info msg="Daemon has completed initialization"
	Mar 27 23:31:15 addons-120100 dockerd[662]: time="2024-03-27T23:31:15.653383999Z" level=info msg="API listen on /var/run/docker.sock"
	Mar 27 23:31:15 addons-120100 dockerd[662]: time="2024-03-27T23:31:15.653536999Z" level=info msg="API listen on [::]:2376"
	Mar 27 23:31:15 addons-120100 systemd[1]: Started Docker Application Container Engine.
	Mar 27 23:31:48 addons-120100 dockerd[662]: time="2024-03-27T23:31:48.823011379Z" level=info msg="Processing signal 'terminated'"
	Mar 27 23:31:48 addons-120100 systemd[1]: Stopping Docker Application Container Engine...
	Mar 27 23:31:48 addons-120100 dockerd[662]: time="2024-03-27T23:31:48.825298578Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Mar 27 23:31:48 addons-120100 dockerd[662]: time="2024-03-27T23:31:48.825787078Z" level=info msg="Daemon shutdown complete"
	Mar 27 23:31:48 addons-120100 dockerd[662]: time="2024-03-27T23:31:48.825885977Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Mar 27 23:31:48 addons-120100 dockerd[662]: time="2024-03-27T23:31:48.825919177Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Mar 27 23:31:49 addons-120100 systemd[1]: docker.service: Deactivated successfully.
	Mar 27 23:31:49 addons-120100 systemd[1]: Stopped Docker Application Container Engine.
	Mar 27 23:31:49 addons-120100 systemd[1]: Starting Docker Application Container Engine...
	Mar 27 23:31:49 addons-120100 dockerd[1013]: time="2024-03-27T23:31:49.909323488Z" level=info msg="Starting up"
	Mar 27 23:32:49 addons-120100 dockerd[1013]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Mar 27 23:32:49 addons-120100 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Mar 27 23:32:49 addons-120100 systemd[1]: docker.service: Failed with result 'exit-code'.
	Mar 27 23:32:49 addons-120100 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	W0327 23:32:49.999603    7424 out.go:239] * 
	W0327 23:32:50.001732    7424 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0327 23:32:50.004794    7424 out.go:177] 

                                                
                                                
** /stderr **
addons_test.go:111: out/minikube-windows-amd64.exe start -p addons-120100 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=hyperv --addons=ingress --addons=ingress-dns --addons=helm-tiller failed: exit status 90
--- FAIL: TestAddons/Setup (225.97s)
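The root cause above is a 60-second dial timeout: after `systemctl restart docker`, the new dockerd (pid 1013) never reaches the managed containerd at `/run/containerd/containerd.sock` and gives up with "context deadline exceeded". A minimal sketch of a wait-for-socket helper of the kind one might use to confirm containerd is actually listening before retrying the restart (the path and timeout here are illustrative, not taken from minikube's source):

```shell
# wait_for_socket: poll for a unix socket until a deadline (in seconds),
# mimicking the dial deadline dockerd applies to containerd's socket.
wait_for_socket() {
  sock="$1"; deadline="$2"; waited=0
  while [ "$waited" -lt "$deadline" ]; do
    # -S is true only once the socket file actually exists
    if [ -S "$sock" ]; then
      echo "socket ready after ${waited}s"
      return 0
    fi
    sleep 1
    waited=$((waited + 1))
  done
  echo "deadline exceeded waiting for $sock" >&2
  return 1
}
```

On a healthy node, `wait_for_socket /run/containerd/containerd.sock 60` would return 0 well before the deadline; here it would time out, pointing the investigation at containerd rather than dockerd itself.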

                                                
                                    
TestErrorSpam/setup (209.51s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-windows-amd64.exe start -p nospam-199000 -n=1 --memory=2250 --wait=false --log_dir=C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-199000 --driver=hyperv
error_spam_test.go:81: (dbg) Done: out/minikube-windows-amd64.exe start -p nospam-199000 -n=1 --memory=2250 --wait=false --log_dir=C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-199000 --driver=hyperv: (3m29.5135975s)
error_spam_test.go:96: unexpected stderr: "W0327 23:34:05.213896    7552 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube6\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified."
error_spam_test.go:110: minikube stdout:
* [nospam-199000] minikube v1.33.0-beta.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4170 Build 19045.4170
- KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
- MINIKUBE_LOCATION=18485
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
* Using the hyperv driver based on user configuration
* Starting "nospam-199000" primary control-plane node in "nospam-199000" cluster
* Creating hyperv VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
* Preparing Kubernetes v1.29.3 on Docker 26.0.0 ...
- Generating certificates and keys ...
- Booting up control plane ...
- Configuring RBAC rules ...
* Configuring bridge CNI (Container Networking Interface) ...
* Verifying Kubernetes components...
- Using image gcr.io/k8s-minikube/storage-provisioner:v5
* Enabled addons: storage-provisioner, default-storageclass
* Done! kubectl is now configured to use "nospam-199000" cluster and "default" namespace by default
error_spam_test.go:111: minikube stderr:
W0327 23:34:05.213896    7552 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
--- FAIL: TestErrorSpam/setup (209.51s)
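The stderr warning above points at a hashed directory under `.docker\contexts\meta`. The Docker CLI derives that directory name from the SHA-256 hex digest of the context name, which is why a missing `meta.json` for the `default` context surfaces at exactly that path. A minimal sketch (`contextDir` is a hypothetical helper name, not the CLI's own function):

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

// contextDir returns the metadata directory name the Docker CLI uses
// for a context: the hex-encoded SHA-256 of the context name.
func contextDir(name string) string {
	return fmt.Sprintf("%x", sha256.Sum256([]byte(name)))
}

func main() {
	// Matches the hash in the warning above for the "default" context.
	fmt.Println(contextDir("default"))
}
```

Running this prints `37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f`, the same directory name seen in the unexpected stderr, confirming the warning is about an unresolvable `default` context rather than a corrupted path.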

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (36.55s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:731: link out/minikube-windows-amd64.exe out\kubectl.exe: Cannot create a file when that file already exists.
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-848700 -n functional-848700
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-848700 -n functional-848700: (12.9402329s)
helpers_test.go:244: <<< TestFunctional/serial/MinikubeKubectlCmdDirectly FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/serial/MinikubeKubectlCmdDirectly]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-848700 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p functional-848700 logs -n 25: (9.3003914s)
helpers_test.go:252: TestFunctional/serial/MinikubeKubectlCmdDirectly logs: 
-- stdout --
	
	==> Audit <==
	|---------|-------------------------------------------------------------|-------------------|-------------------|----------------|---------------------|---------------------|
	| Command |                            Args                             |      Profile      |       User        |    Version     |     Start Time      |      End Time       |
	|---------|-------------------------------------------------------------|-------------------|-------------------|----------------|---------------------|---------------------|
	| pause   | nospam-199000 --log_dir                                     | nospam-199000     | minikube6\jenkins | v1.33.0-beta.0 | 27 Mar 24 23:38 UTC | 27 Mar 24 23:38 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-199000 |                   |                   |                |                     |                     |
	|         | pause                                                       |                   |                   |                |                     |                     |
	| unpause | nospam-199000 --log_dir                                     | nospam-199000     | minikube6\jenkins | v1.33.0-beta.0 | 27 Mar 24 23:38 UTC | 27 Mar 24 23:39 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-199000 |                   |                   |                |                     |                     |
	|         | unpause                                                     |                   |                   |                |                     |                     |
	| unpause | nospam-199000 --log_dir                                     | nospam-199000     | minikube6\jenkins | v1.33.0-beta.0 | 27 Mar 24 23:39 UTC | 27 Mar 24 23:39 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-199000 |                   |                   |                |                     |                     |
	|         | unpause                                                     |                   |                   |                |                     |                     |
	| unpause | nospam-199000 --log_dir                                     | nospam-199000     | minikube6\jenkins | v1.33.0-beta.0 | 27 Mar 24 23:39 UTC | 27 Mar 24 23:39 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-199000 |                   |                   |                |                     |                     |
	|         | unpause                                                     |                   |                   |                |                     |                     |
	| stop    | nospam-199000 --log_dir                                     | nospam-199000     | minikube6\jenkins | v1.33.0-beta.0 | 27 Mar 24 23:39 UTC | 27 Mar 24 23:40 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-199000 |                   |                   |                |                     |                     |
	|         | stop                                                        |                   |                   |                |                     |                     |
	| stop    | nospam-199000 --log_dir                                     | nospam-199000     | minikube6\jenkins | v1.33.0-beta.0 | 27 Mar 24 23:40 UTC | 27 Mar 24 23:40 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-199000 |                   |                   |                |                     |                     |
	|         | stop                                                        |                   |                   |                |                     |                     |
	| stop    | nospam-199000 --log_dir                                     | nospam-199000     | minikube6\jenkins | v1.33.0-beta.0 | 27 Mar 24 23:40 UTC | 27 Mar 24 23:40 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-199000 |                   |                   |                |                     |                     |
	|         | stop                                                        |                   |                   |                |                     |                     |
	| delete  | -p nospam-199000                                            | nospam-199000     | minikube6\jenkins | v1.33.0-beta.0 | 27 Mar 24 23:40 UTC | 27 Mar 24 23:40 UTC |
	| start   | -p functional-848700                                        | functional-848700 | minikube6\jenkins | v1.33.0-beta.0 | 27 Mar 24 23:40 UTC | 27 Mar 24 23:45 UTC |
	|         | --memory=4000                                               |                   |                   |                |                     |                     |
	|         | --apiserver-port=8441                                       |                   |                   |                |                     |                     |
	|         | --wait=all --driver=hyperv                                  |                   |                   |                |                     |                     |
	| start   | -p functional-848700                                        | functional-848700 | minikube6\jenkins | v1.33.0-beta.0 | 27 Mar 24 23:45 UTC | 27 Mar 24 23:47 UTC |
	|         | --alsologtostderr -v=8                                      |                   |                   |                |                     |                     |
	| cache   | functional-848700 cache add                                 | functional-848700 | minikube6\jenkins | v1.33.0-beta.0 | 27 Mar 24 23:47 UTC | 27 Mar 24 23:47 UTC |
	|         | registry.k8s.io/pause:3.1                                   |                   |                   |                |                     |                     |
	| cache   | functional-848700 cache add                                 | functional-848700 | minikube6\jenkins | v1.33.0-beta.0 | 27 Mar 24 23:47 UTC | 27 Mar 24 23:47 UTC |
	|         | registry.k8s.io/pause:3.3                                   |                   |                   |                |                     |                     |
	| cache   | functional-848700 cache add                                 | functional-848700 | minikube6\jenkins | v1.33.0-beta.0 | 27 Mar 24 23:47 UTC | 27 Mar 24 23:47 UTC |
	|         | registry.k8s.io/pause:latest                                |                   |                   |                |                     |                     |
	| cache   | functional-848700 cache add                                 | functional-848700 | minikube6\jenkins | v1.33.0-beta.0 | 27 Mar 24 23:47 UTC | 27 Mar 24 23:47 UTC |
	|         | minikube-local-cache-test:functional-848700                 |                   |                   |                |                     |                     |
	| cache   | functional-848700 cache delete                              | functional-848700 | minikube6\jenkins | v1.33.0-beta.0 | 27 Mar 24 23:47 UTC | 27 Mar 24 23:47 UTC |
	|         | minikube-local-cache-test:functional-848700                 |                   |                   |                |                     |                     |
	| cache   | delete                                                      | minikube          | minikube6\jenkins | v1.33.0-beta.0 | 27 Mar 24 23:47 UTC | 27 Mar 24 23:47 UTC |
	|         | registry.k8s.io/pause:3.3                                   |                   |                   |                |                     |                     |
	| cache   | list                                                        | minikube          | minikube6\jenkins | v1.33.0-beta.0 | 27 Mar 24 23:47 UTC | 27 Mar 24 23:47 UTC |
	| ssh     | functional-848700 ssh sudo                                  | functional-848700 | minikube6\jenkins | v1.33.0-beta.0 | 27 Mar 24 23:47 UTC | 27 Mar 24 23:48 UTC |
	|         | crictl images                                               |                   |                   |                |                     |                     |
	| ssh     | functional-848700                                           | functional-848700 | minikube6\jenkins | v1.33.0-beta.0 | 27 Mar 24 23:48 UTC | 27 Mar 24 23:48 UTC |
	|         | ssh sudo docker rmi                                         |                   |                   |                |                     |                     |
	|         | registry.k8s.io/pause:latest                                |                   |                   |                |                     |                     |
	| ssh     | functional-848700 ssh                                       | functional-848700 | minikube6\jenkins | v1.33.0-beta.0 | 27 Mar 24 23:48 UTC |                     |
	|         | sudo crictl inspecti                                        |                   |                   |                |                     |                     |
	|         | registry.k8s.io/pause:latest                                |                   |                   |                |                     |                     |
	| cache   | functional-848700 cache reload                              | functional-848700 | minikube6\jenkins | v1.33.0-beta.0 | 27 Mar 24 23:48 UTC | 27 Mar 24 23:48 UTC |
	| ssh     | functional-848700 ssh                                       | functional-848700 | minikube6\jenkins | v1.33.0-beta.0 | 27 Mar 24 23:48 UTC | 27 Mar 24 23:48 UTC |
	|         | sudo crictl inspecti                                        |                   |                   |                |                     |                     |
	|         | registry.k8s.io/pause:latest                                |                   |                   |                |                     |                     |
	| cache   | delete                                                      | minikube          | minikube6\jenkins | v1.33.0-beta.0 | 27 Mar 24 23:48 UTC | 27 Mar 24 23:48 UTC |
	|         | registry.k8s.io/pause:3.1                                   |                   |                   |                |                     |                     |
	| cache   | delete                                                      | minikube          | minikube6\jenkins | v1.33.0-beta.0 | 27 Mar 24 23:48 UTC | 27 Mar 24 23:48 UTC |
	|         | registry.k8s.io/pause:latest                                |                   |                   |                |                     |                     |
	| kubectl | functional-848700 kubectl --                                | functional-848700 | minikube6\jenkins | v1.33.0-beta.0 | 27 Mar 24 23:48 UTC | 27 Mar 24 23:48 UTC |
	|         | --context functional-848700                                 |                   |                   |                |                     |                     |
	|         | get pods                                                    |                   |                   |                |                     |                     |
	|---------|-------------------------------------------------------------|-------------------|-------------------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/27 23:45:03
	Running on machine: minikube6
	Binary: Built with gc go1.22.1 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0327 23:45:03.233272    8488 out.go:291] Setting OutFile to fd 728 ...
	I0327 23:45:03.233272    8488 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 23:45:03.233272    8488 out.go:304] Setting ErrFile to fd 856...
	I0327 23:45:03.233272    8488 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 23:45:03.257306    8488 out.go:298] Setting JSON to false
	I0327 23:45:03.261605    8488 start.go:129] hostinfo: {"hostname":"minikube6","uptime":5764,"bootTime":1711577338,"procs":190,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4170 Build 19045.4170","kernelVersion":"10.0.19045.4170 Build 19045.4170","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0327 23:45:03.261682    8488 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0327 23:45:03.267682    8488 out.go:177] * [functional-848700] minikube v1.33.0-beta.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4170 Build 19045.4170
	I0327 23:45:03.270090    8488 notify.go:220] Checking for updates...
	I0327 23:45:03.272167    8488 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0327 23:45:03.275783    8488 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0327 23:45:03.278920    8488 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0327 23:45:03.280921    8488 out.go:177]   - MINIKUBE_LOCATION=18485
	I0327 23:45:03.283829    8488 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0327 23:45:03.291261    8488 config.go:182] Loaded profile config "functional-848700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0327 23:45:03.291394    8488 driver.go:392] Setting default libvirt URI to qemu:///system
	I0327 23:45:09.020009    8488 out.go:177] * Using the hyperv driver based on existing profile
	I0327 23:45:09.023555    8488 start.go:297] selected driver: hyperv
	I0327 23:45:09.023555    8488 start.go:901] validating driver "hyperv" against &{Name:functional-848700 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:functional-848700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.28.236.250 Port:8441 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0327 23:45:09.023555    8488 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0327 23:45:09.080266    8488 cni.go:84] Creating CNI manager for ""
	I0327 23:45:09.080266    8488 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0327 23:45:09.081039    8488 start.go:340] cluster config:
	{Name:functional-848700 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:functional-848700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.28.236.250 Port:8441 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0327 23:45:09.081075    8488 iso.go:125] acquiring lock: {Name:mk879943e10653d47fd8ae811a43a2f6cff06f02 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0327 23:45:09.086744    8488 out.go:177] * Starting "functional-848700" primary control-plane node in "functional-848700" cluster
	I0327 23:45:09.089899    8488 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0327 23:45:09.089899    8488 preload.go:147] Found local preload: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4
	I0327 23:45:09.089899    8488 cache.go:56] Caching tarball of preloaded images
	I0327 23:45:09.090736    8488 preload.go:173] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0327 23:45:09.090766    8488 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0327 23:45:09.091101    8488 profile.go:142] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-848700\config.json ...
	I0327 23:45:09.093974    8488 start.go:360] acquireMachinesLock for functional-848700: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0327 23:45:09.094221    8488 start.go:364] duration metric: took 150.5µs to acquireMachinesLock for "functional-848700"
	I0327 23:45:09.094424    8488 start.go:96] Skipping create...Using existing machine configuration
	I0327 23:45:09.094474    8488 fix.go:54] fixHost starting: 
	I0327 23:45:09.094474    8488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-848700 ).state
	I0327 23:45:12.098787    8488 main.go:141] libmachine: [stdout =====>] : Running
	
	I0327 23:45:12.098787    8488 main.go:141] libmachine: [stderr =====>] : 
	I0327 23:45:12.098787    8488 fix.go:112] recreateIfNeeded on functional-848700: state=Running err=<nil>
	W0327 23:45:12.098787    8488 fix.go:138] unexpected machine state, will restart: <nil>
	I0327 23:45:12.104541    8488 out.go:177] * Updating the running hyperv "functional-848700" VM ...
	I0327 23:45:12.109353    8488 machine.go:94] provisionDockerMachine start ...
	I0327 23:45:12.109353    8488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-848700 ).state
	I0327 23:45:14.428506    8488 main.go:141] libmachine: [stdout =====>] : Running
	
	I0327 23:45:14.428506    8488 main.go:141] libmachine: [stderr =====>] : 
	I0327 23:45:14.428824    8488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-848700 ).networkadapters[0]).ipaddresses[0]
	I0327 23:45:17.191858    8488 main.go:141] libmachine: [stdout =====>] : 172.28.236.250
	
	I0327 23:45:17.191858    8488 main.go:141] libmachine: [stderr =====>] : 
	I0327 23:45:17.201209    8488 main.go:141] libmachine: Using SSH client type: native
	I0327 23:45:17.201913    8488 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x12a9f80] 0x12acb60 <nil>  [] 0s} 172.28.236.250 22 <nil> <nil>}
	I0327 23:45:17.201913    8488 main.go:141] libmachine: About to run SSH command:
	hostname
	I0327 23:45:17.350500    8488 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-848700
	
	I0327 23:45:17.350500    8488 buildroot.go:166] provisioning hostname "functional-848700"
	I0327 23:45:17.350629    8488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-848700 ).state
	I0327 23:45:19.658137    8488 main.go:141] libmachine: [stdout =====>] : Running
	
	I0327 23:45:19.659189    8488 main.go:141] libmachine: [stderr =====>] : 
	I0327 23:45:19.659295    8488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-848700 ).networkadapters[0]).ipaddresses[0]
	I0327 23:45:22.445066    8488 main.go:141] libmachine: [stdout =====>] : 172.28.236.250
	
	I0327 23:45:22.445066    8488 main.go:141] libmachine: [stderr =====>] : 
	I0327 23:45:22.451998    8488 main.go:141] libmachine: Using SSH client type: native
	I0327 23:45:22.451998    8488 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x12a9f80] 0x12acb60 <nil>  [] 0s} 172.28.236.250 22 <nil> <nil>}
	I0327 23:45:22.451998    8488 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-848700 && echo "functional-848700" | sudo tee /etc/hostname
	I0327 23:45:22.628632    8488 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-848700
	
	I0327 23:45:22.628693    8488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-848700 ).state
	I0327 23:45:24.898152    8488 main.go:141] libmachine: [stdout =====>] : Running
	
	I0327 23:45:24.898152    8488 main.go:141] libmachine: [stderr =====>] : 
	I0327 23:45:24.898314    8488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-848700 ).networkadapters[0]).ipaddresses[0]
	I0327 23:45:27.669041    8488 main.go:141] libmachine: [stdout =====>] : 172.28.236.250
	
	I0327 23:45:27.669041    8488 main.go:141] libmachine: [stderr =====>] : 
	I0327 23:45:27.675765    8488 main.go:141] libmachine: Using SSH client type: native
	I0327 23:45:27.677082    8488 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x12a9f80] 0x12acb60 <nil>  [] 0s} 172.28.236.250 22 <nil> <nil>}
	I0327 23:45:27.677082    8488 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-848700' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-848700/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-848700' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0327 23:45:27.827148    8488 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0327 23:45:27.827259    8488 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0327 23:45:27.827259    8488 buildroot.go:174] setting up certificates
	I0327 23:45:27.827259    8488 provision.go:84] configureAuth start
	I0327 23:45:27.827447    8488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-848700 ).state
	I0327 23:45:30.199219    8488 main.go:141] libmachine: [stdout =====>] : Running
	
	I0327 23:45:30.199642    8488 main.go:141] libmachine: [stderr =====>] : 
	I0327 23:45:30.199722    8488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-848700 ).networkadapters[0]).ipaddresses[0]
	I0327 23:45:33.021566    8488 main.go:141] libmachine: [stdout =====>] : 172.28.236.250
	
	I0327 23:45:33.021566    8488 main.go:141] libmachine: [stderr =====>] : 
	I0327 23:45:33.022352    8488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-848700 ).state
	I0327 23:45:35.333736    8488 main.go:141] libmachine: [stdout =====>] : Running
	
	I0327 23:45:35.333736    8488 main.go:141] libmachine: [stderr =====>] : 
	I0327 23:45:35.334550    8488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-848700 ).networkadapters[0]).ipaddresses[0]
	I0327 23:45:38.119438    8488 main.go:141] libmachine: [stdout =====>] : 172.28.236.250
	
	I0327 23:45:38.119438    8488 main.go:141] libmachine: [stderr =====>] : 
	I0327 23:45:38.119438    8488 provision.go:143] copyHostCerts
	I0327 23:45:38.120191    8488 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0327 23:45:38.120578    8488 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0327 23:45:38.120653    8488 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0327 23:45:38.121129    8488 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0327 23:45:38.122187    8488 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0327 23:45:38.122301    8488 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0327 23:45:38.122301    8488 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0327 23:45:38.122301    8488 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0327 23:45:38.123759    8488 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0327 23:45:38.123996    8488 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0327 23:45:38.123996    8488 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0327 23:45:38.124443    8488 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1675 bytes)
	I0327 23:45:38.125423    8488 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.functional-848700 san=[127.0.0.1 172.28.236.250 functional-848700 localhost minikube]
	I0327 23:45:38.323582    8488 provision.go:177] copyRemoteCerts
	I0327 23:45:38.335460    8488 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0327 23:45:38.335460    8488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-848700 ).state
	I0327 23:45:40.660150    8488 main.go:141] libmachine: [stdout =====>] : Running
	
	I0327 23:45:40.660332    8488 main.go:141] libmachine: [stderr =====>] : 
	I0327 23:45:40.660694    8488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-848700 ).networkadapters[0]).ipaddresses[0]
	I0327 23:45:43.467168    8488 main.go:141] libmachine: [stdout =====>] : 172.28.236.250
	
	I0327 23:45:43.468221    8488 main.go:141] libmachine: [stderr =====>] : 
	I0327 23:45:43.468499    8488 sshutil.go:53] new ssh client: &{IP:172.28.236.250 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-848700\id_rsa Username:docker}
	I0327 23:45:43.581328    8488 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (5.2458085s)
	I0327 23:45:43.581328    8488 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0327 23:45:43.582485    8488 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0327 23:45:43.633560    8488 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0327 23:45:43.634104    8488 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1220 bytes)
	I0327 23:45:43.688050    8488 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0327 23:45:43.688050    8488 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0327 23:45:43.741681    8488 provision.go:87] duration metric: took 15.9141964s to configureAuth
	I0327 23:45:43.741681    8488 buildroot.go:189] setting minikube options for container-runtime
	I0327 23:45:43.742353    8488 config.go:182] Loaded profile config "functional-848700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0327 23:45:43.742428    8488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-848700 ).state
	I0327 23:45:46.071853    8488 main.go:141] libmachine: [stdout =====>] : Running
	
	I0327 23:45:46.071853    8488 main.go:141] libmachine: [stderr =====>] : 
	I0327 23:45:46.072322    8488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-848700 ).networkadapters[0]).ipaddresses[0]
	I0327 23:45:48.877987    8488 main.go:141] libmachine: [stdout =====>] : 172.28.236.250
	
	I0327 23:45:48.877987    8488 main.go:141] libmachine: [stderr =====>] : 
	I0327 23:45:48.887533    8488 main.go:141] libmachine: Using SSH client type: native
	I0327 23:45:48.888224    8488 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x12a9f80] 0x12acb60 <nil>  [] 0s} 172.28.236.250 22 <nil> <nil>}
	I0327 23:45:48.888224    8488 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0327 23:45:49.043518    8488 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0327 23:45:49.043518    8488 buildroot.go:70] root file system type: tmpfs
	I0327 23:45:49.043518    8488 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0327 23:45:49.044054    8488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-848700 ).state
	I0327 23:45:51.390110    8488 main.go:141] libmachine: [stdout =====>] : Running
	
	I0327 23:45:51.390110    8488 main.go:141] libmachine: [stderr =====>] : 
	I0327 23:45:51.390654    8488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-848700 ).networkadapters[0]).ipaddresses[0]
	I0327 23:45:54.134771    8488 main.go:141] libmachine: [stdout =====>] : 172.28.236.250
	
	I0327 23:45:54.134771    8488 main.go:141] libmachine: [stderr =====>] : 
	I0327 23:45:54.141819    8488 main.go:141] libmachine: Using SSH client type: native
	I0327 23:45:54.141819    8488 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x12a9f80] 0x12acb60 <nil>  [] 0s} 172.28.236.250 22 <nil> <nil>}
	I0327 23:45:54.142964    8488 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0327 23:45:54.318452    8488 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0327 23:45:54.318452    8488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-848700 ).state
	I0327 23:45:56.658787    8488 main.go:141] libmachine: [stdout =====>] : Running
	
	I0327 23:45:56.658970    8488 main.go:141] libmachine: [stderr =====>] : 
	I0327 23:45:56.659127    8488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-848700 ).networkadapters[0]).ipaddresses[0]
	I0327 23:45:59.413450    8488 main.go:141] libmachine: [stdout =====>] : 172.28.236.250
	
	I0327 23:45:59.413450    8488 main.go:141] libmachine: [stderr =====>] : 
	I0327 23:45:59.420781    8488 main.go:141] libmachine: Using SSH client type: native
	I0327 23:45:59.421561    8488 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x12a9f80] 0x12acb60 <nil>  [] 0s} 172.28.236.250 22 <nil> <nil>}
	I0327 23:45:59.421561    8488 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0327 23:45:59.568540    8488 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0327 23:45:59.568540    8488 machine.go:97] duration metric: took 47.4589212s to provisionDockerMachine
	I0327 23:45:59.568540    8488 start.go:293] postStartSetup for "functional-848700" (driver="hyperv")
	I0327 23:45:59.568657    8488 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0327 23:45:59.582148    8488 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0327 23:45:59.582148    8488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-848700 ).state
	I0327 23:46:01.867293    8488 main.go:141] libmachine: [stdout =====>] : Running
	
	I0327 23:46:01.868328    8488 main.go:141] libmachine: [stderr =====>] : 
	I0327 23:46:01.868392    8488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-848700 ).networkadapters[0]).ipaddresses[0]
	I0327 23:46:04.632758    8488 main.go:141] libmachine: [stdout =====>] : 172.28.236.250
	
	I0327 23:46:04.632758    8488 main.go:141] libmachine: [stderr =====>] : 
	I0327 23:46:04.634224    8488 sshutil.go:53] new ssh client: &{IP:172.28.236.250 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-848700\id_rsa Username:docker}
	I0327 23:46:04.742087    8488 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (5.1599097s)
	I0327 23:46:04.755022    8488 ssh_runner.go:195] Run: cat /etc/os-release
	I0327 23:46:04.762371    8488 command_runner.go:130] > NAME=Buildroot
	I0327 23:46:04.762485    8488 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0327 23:46:04.762485    8488 command_runner.go:130] > ID=buildroot
	I0327 23:46:04.762485    8488 command_runner.go:130] > VERSION_ID=2023.02.9
	I0327 23:46:04.762485    8488 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0327 23:46:04.762595    8488 info.go:137] Remote host: Buildroot 2023.02.9
	I0327 23:46:04.762595    8488 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0327 23:46:04.763042    8488 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0327 23:46:04.763479    8488 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\104602.pem -> 104602.pem in /etc/ssl/certs
	I0327 23:46:04.763479    8488 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\104602.pem -> /etc/ssl/certs/104602.pem
	I0327 23:46:04.764448    8488 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\test\nested\copy\10460\hosts -> hosts in /etc/test/nested/copy/10460
	I0327 23:46:04.764448    8488 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\test\nested\copy\10460\hosts -> /etc/test/nested/copy/10460/hosts
	I0327 23:46:04.777028    8488 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/10460
	I0327 23:46:04.797738    8488 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\104602.pem --> /etc/ssl/certs/104602.pem (1708 bytes)
	I0327 23:46:04.851830    8488 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\test\nested\copy\10460\hosts --> /etc/test/nested/copy/10460/hosts (40 bytes)
	I0327 23:46:04.909735    8488 start.go:296] duration metric: took 5.3410485s for postStartSetup
	I0327 23:46:04.909735    8488 fix.go:56] duration metric: took 55.8149981s for fixHost
	I0327 23:46:04.909735    8488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-848700 ).state
	I0327 23:46:07.211511    8488 main.go:141] libmachine: [stdout =====>] : Running
	
	I0327 23:46:07.212017    8488 main.go:141] libmachine: [stderr =====>] : 
	I0327 23:46:07.212184    8488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-848700 ).networkadapters[0]).ipaddresses[0]
	I0327 23:46:09.970630    8488 main.go:141] libmachine: [stdout =====>] : 172.28.236.250
	
	I0327 23:46:09.971569    8488 main.go:141] libmachine: [stderr =====>] : 
	I0327 23:46:09.978780    8488 main.go:141] libmachine: Using SSH client type: native
	I0327 23:46:09.979097    8488 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x12a9f80] 0x12acb60 <nil>  [] 0s} 172.28.236.250 22 <nil> <nil>}
	I0327 23:46:09.979097    8488 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0327 23:46:10.117961    8488 main.go:141] libmachine: SSH cmd err, output: <nil>: 1711583170.127560394
	
	I0327 23:46:10.117961    8488 fix.go:216] guest clock: 1711583170.127560394
	I0327 23:46:10.117961    8488 fix.go:229] Guest: 2024-03-27 23:46:10.127560394 +0000 UTC Remote: 2024-03-27 23:46:04.9097354 +0000 UTC m=+61.857137401 (delta=5.217824994s)
	I0327 23:46:10.117961    8488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-848700 ).state
	I0327 23:46:12.405263    8488 main.go:141] libmachine: [stdout =====>] : Running
	
	I0327 23:46:12.405263    8488 main.go:141] libmachine: [stderr =====>] : 
	I0327 23:46:12.405263    8488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-848700 ).networkadapters[0]).ipaddresses[0]
	I0327 23:46:15.150302    8488 main.go:141] libmachine: [stdout =====>] : 172.28.236.250
	
	I0327 23:46:15.150730    8488 main.go:141] libmachine: [stderr =====>] : 
	I0327 23:46:15.157088    8488 main.go:141] libmachine: Using SSH client type: native
	I0327 23:46:15.157687    8488 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x12a9f80] 0x12acb60 <nil>  [] 0s} 172.28.236.250 22 <nil> <nil>}
	I0327 23:46:15.157687    8488 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1711583170
	I0327 23:46:15.324460    8488 main.go:141] libmachine: SSH cmd err, output: <nil>: Wed Mar 27 23:46:10 UTC 2024
	
	I0327 23:46:15.324503    8488 fix.go:236] clock set: Wed Mar 27 23:46:10 UTC 2024
	 (err=<nil>)
	I0327 23:46:15.324503    8488 start.go:83] releasing machines lock for "functional-848700", held for 1m6.2299116s
	I0327 23:46:15.324805    8488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-848700 ).state
	I0327 23:46:17.610659    8488 main.go:141] libmachine: [stdout =====>] : Running
	
	I0327 23:46:17.610739    8488 main.go:141] libmachine: [stderr =====>] : 
	I0327 23:46:17.610806    8488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-848700 ).networkadapters[0]).ipaddresses[0]
	I0327 23:46:20.397865    8488 main.go:141] libmachine: [stdout =====>] : 172.28.236.250
	
	I0327 23:46:20.397865    8488 main.go:141] libmachine: [stderr =====>] : 
	I0327 23:46:20.404938    8488 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0327 23:46:20.405188    8488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-848700 ).state
	I0327 23:46:20.415839    8488 ssh_runner.go:195] Run: cat /version.json
	I0327 23:46:20.415839    8488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-848700 ).state
	I0327 23:46:22.721484    8488 main.go:141] libmachine: [stdout =====>] : Running
	
	I0327 23:46:22.721536    8488 main.go:141] libmachine: [stderr =====>] : 
	I0327 23:46:22.721536    8488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-848700 ).networkadapters[0]).ipaddresses[0]
	I0327 23:46:22.734783    8488 main.go:141] libmachine: [stdout =====>] : Running
	
	I0327 23:46:22.734783    8488 main.go:141] libmachine: [stderr =====>] : 
	I0327 23:46:22.734783    8488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-848700 ).networkadapters[0]).ipaddresses[0]
	I0327 23:46:25.580282    8488 main.go:141] libmachine: [stdout =====>] : 172.28.236.250
	
	I0327 23:46:25.580574    8488 main.go:141] libmachine: [stderr =====>] : 
	I0327 23:46:25.580792    8488 sshutil.go:53] new ssh client: &{IP:172.28.236.250 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-848700\id_rsa Username:docker}
	I0327 23:46:25.611331    8488 main.go:141] libmachine: [stdout =====>] : 172.28.236.250
	
	I0327 23:46:25.611366    8488 main.go:141] libmachine: [stderr =====>] : 
	I0327 23:46:25.611473    8488 sshutil.go:53] new ssh client: &{IP:172.28.236.250 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-848700\id_rsa Username:docker}
	I0327 23:46:25.754042    8488 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0327 23:46:25.754042    8488 command_runner.go:130] > {"iso_version": "v1.33.0-1711559712-18485", "kicbase_version": "v0.0.43-beta.0", "minikube_version": "v1.33.0-beta.0", "commit": "db97f5257476488cfa11a4cd2d95d2aa6fbd9d33"}
	I0327 23:46:25.754042    8488 ssh_runner.go:235] Completed: cat /version.json: (5.338174s)
	I0327 23:46:25.754042    8488 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.3490162s)
	I0327 23:46:25.768532    8488 ssh_runner.go:195] Run: systemctl --version
	I0327 23:46:25.779371    8488 command_runner.go:130] > systemd 252 (252)
	I0327 23:46:25.779371    8488 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0327 23:46:25.792321    8488 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0327 23:46:25.803251    8488 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0327 23:46:25.805231    8488 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0327 23:46:25.822335    8488 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0327 23:46:25.847458    8488 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0327 23:46:25.847622    8488 start.go:494] detecting cgroup driver to use...
	I0327 23:46:25.847715    8488 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0327 23:46:25.888003    8488 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0327 23:46:25.901895    8488 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0327 23:46:25.939398    8488 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0327 23:46:25.965088    8488 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0327 23:46:25.978870    8488 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0327 23:46:26.015434    8488 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0327 23:46:26.052342    8488 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0327 23:46:26.088336    8488 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0327 23:46:26.124371    8488 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0327 23:46:26.161692    8488 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0327 23:46:26.196701    8488 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0327 23:46:26.230680    8488 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0327 23:46:26.267006    8488 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0327 23:46:26.290729    8488 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0327 23:46:26.307414    8488 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0327 23:46:26.345503    8488 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0327 23:46:26.648672    8488 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0327 23:46:26.691832    8488 start.go:494] detecting cgroup driver to use...
	I0327 23:46:26.705920    8488 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0327 23:46:26.738051    8488 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0327 23:46:26.738051    8488 command_runner.go:130] > [Unit]
	I0327 23:46:26.738051    8488 command_runner.go:130] > Description=Docker Application Container Engine
	I0327 23:46:26.738150    8488 command_runner.go:130] > Documentation=https://docs.docker.com
	I0327 23:46:26.738150    8488 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0327 23:46:26.738150    8488 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0327 23:46:26.738150    8488 command_runner.go:130] > StartLimitBurst=3
	I0327 23:46:26.738150    8488 command_runner.go:130] > StartLimitIntervalSec=60
	I0327 23:46:26.738285    8488 command_runner.go:130] > [Service]
	I0327 23:46:26.738285    8488 command_runner.go:130] > Type=notify
	I0327 23:46:26.738285    8488 command_runner.go:130] > Restart=on-failure
	I0327 23:46:26.738285    8488 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0327 23:46:26.738355    8488 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0327 23:46:26.738355    8488 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0327 23:46:26.738416    8488 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0327 23:46:26.738416    8488 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0327 23:46:26.738416    8488 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0327 23:46:26.738416    8488 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0327 23:46:26.738416    8488 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0327 23:46:26.738492    8488 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0327 23:46:26.738492    8488 command_runner.go:130] > ExecStart=
	I0327 23:46:26.738544    8488 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0327 23:46:26.738544    8488 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0327 23:46:26.738544    8488 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0327 23:46:26.738544    8488 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0327 23:46:26.738544    8488 command_runner.go:130] > LimitNOFILE=infinity
	I0327 23:46:26.738610    8488 command_runner.go:130] > LimitNPROC=infinity
	I0327 23:46:26.738610    8488 command_runner.go:130] > LimitCORE=infinity
	I0327 23:46:26.738610    8488 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0327 23:46:26.738610    8488 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0327 23:46:26.738610    8488 command_runner.go:130] > TasksMax=infinity
	I0327 23:46:26.738668    8488 command_runner.go:130] > TimeoutStartSec=0
	I0327 23:46:26.738668    8488 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0327 23:46:26.738668    8488 command_runner.go:130] > Delegate=yes
	I0327 23:46:26.738668    8488 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0327 23:46:26.738668    8488 command_runner.go:130] > KillMode=process
	I0327 23:46:26.738668    8488 command_runner.go:130] > [Install]
	I0327 23:46:26.738668    8488 command_runner.go:130] > WantedBy=multi-user.target
	I0327 23:46:26.751807    8488 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0327 23:46:26.790124    8488 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0327 23:46:26.831305    8488 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0327 23:46:26.871640    8488 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0327 23:46:26.903629    8488 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0327 23:46:26.952192    8488 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0327 23:46:26.966273    8488 ssh_runner.go:195] Run: which cri-dockerd
	I0327 23:46:26.973006    8488 command_runner.go:130] > /usr/bin/cri-dockerd
	I0327 23:46:26.986814    8488 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0327 23:46:27.006886    8488 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0327 23:46:27.058854    8488 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0327 23:46:27.360460    8488 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0327 23:46:27.658532    8488 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0327 23:46:27.658821    8488 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0327 23:46:27.712890    8488 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0327 23:46:28.015424    8488 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0327 23:46:41.025302    8488 ssh_runner.go:235] Completed: sudo systemctl restart docker: (13.0098052s)
	I0327 23:46:41.038301    8488 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0327 23:46:41.078311    8488 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0327 23:46:41.133965    8488 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0327 23:46:41.178931    8488 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0327 23:46:41.424903    8488 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0327 23:46:41.654890    8488 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0327 23:46:41.893896    8488 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0327 23:46:41.940106    8488 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0327 23:46:41.977472    8488 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0327 23:46:42.211105    8488 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0327 23:46:42.340252    8488 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0327 23:46:42.354642    8488 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0327 23:46:42.365431    8488 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0327 23:46:42.365516    8488 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0327 23:46:42.365516    8488 command_runner.go:130] > Device: 0,22	Inode: 1447        Links: 1
	I0327 23:46:42.365516    8488 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0327 23:46:42.365516    8488 command_runner.go:130] > Access: 2024-03-27 23:46:42.345581089 +0000
	I0327 23:46:42.365516    8488 command_runner.go:130] > Modify: 2024-03-27 23:46:42.244538112 +0000
	I0327 23:46:42.365516    8488 command_runner.go:130] > Change: 2024-03-27 23:46:42.249540240 +0000
	I0327 23:46:42.365516    8488 command_runner.go:130] >  Birth: -
	I0327 23:46:42.365516    8488 start.go:562] Will wait 60s for crictl version
	I0327 23:46:42.378652    8488 ssh_runner.go:195] Run: which crictl
	I0327 23:46:42.390135    8488 command_runner.go:130] > /usr/bin/crictl
	I0327 23:46:42.402527    8488 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0327 23:46:42.502498    8488 command_runner.go:130] > Version:  0.1.0
	I0327 23:46:42.502977    8488 command_runner.go:130] > RuntimeName:  docker
	I0327 23:46:42.502977    8488 command_runner.go:130] > RuntimeVersion:  26.0.0
	I0327 23:46:42.502977    8488 command_runner.go:130] > RuntimeApiVersion:  v1
	I0327 23:46:42.504496    8488 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.0
	RuntimeApiVersion:  v1
	I0327 23:46:42.516477    8488 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0327 23:46:42.560222    8488 command_runner.go:130] > 26.0.0
	I0327 23:46:42.569761    8488 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0327 23:46:42.604384    8488 command_runner.go:130] > 26.0.0
	I0327 23:46:42.610859    8488 out.go:204] * Preparing Kubernetes v1.29.3 on Docker 26.0.0 ...
	I0327 23:46:42.611862    8488 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0327 23:46:42.616882    8488 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0327 23:46:42.616882    8488 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0327 23:46:42.616882    8488 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0327 23:46:42.616882    8488 ip.go:207] Found interface: {Index:6 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:26:7a:39 Flags:up|broadcast|multicast|running}
	I0327 23:46:42.619847    8488 ip.go:210] interface addr: fe80::e3e0:8483:9c84:940f/64
	I0327 23:46:42.619847    8488 ip.go:210] interface addr: 172.28.224.1/20
	I0327 23:46:42.632847    8488 ssh_runner.go:195] Run: grep 172.28.224.1	host.minikube.internal$ /etc/hosts
	I0327 23:46:42.639595    8488 command_runner.go:130] > 172.28.224.1	host.minikube.internal
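The grep probe above checks whether `/etc/hosts` already carries the `host.minikube.internal` entry before minikube would append one. A minimal sketch of the same anchored pattern against a throwaway file (the temp path is a stand-in for the VM's real `/etc/hosts`):

```shell
set -eu
hosts=$(mktemp)
# Seed the file with the entry the log found (172.28.224.1 is the host-side vEthernet address).
printf '172.28.224.1\thost.minikube.internal\n' > "$hosts"
tab=$(printf '\t')
# Same shape as the log's probe: IP, a literal tab, hostname anchored at end of line.
entry=$(grep "172.28.224.1${tab}host.minikube.internal$" "$hosts")
echo "$entry"
```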
	I0327 23:46:42.639595    8488 kubeadm.go:877] updating cluster {Name:functional-848700 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:
v1.29.3 ClusterName:functional-848700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.28.236.250 Port:8441 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L
MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0327 23:46:42.640141    8488 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0327 23:46:42.650640    8488 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0327 23:46:42.682829    8488 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.29.3
	I0327 23:46:42.682890    8488 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.29.3
	I0327 23:46:42.682890    8488 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.29.3
	I0327 23:46:42.682890    8488 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.29.3
	I0327 23:46:42.682890    8488 command_runner.go:130] > registry.k8s.io/etcd:3.5.12-0
	I0327 23:46:42.682890    8488 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0327 23:46:42.682890    8488 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0327 23:46:42.682890    8488 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0327 23:46:42.682890    8488 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.29.3
	registry.k8s.io/kube-controller-manager:v1.29.3
	registry.k8s.io/kube-scheduler:v1.29.3
	registry.k8s.io/kube-proxy:v1.29.3
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0327 23:46:42.682890    8488 docker.go:615] Images already preloaded, skipping extraction
	I0327 23:46:42.693491    8488 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0327 23:46:42.727244    8488 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.29.3
	I0327 23:46:42.727244    8488 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.29.3
	I0327 23:46:42.727244    8488 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.29.3
	I0327 23:46:42.727244    8488 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.29.3
	I0327 23:46:42.727244    8488 command_runner.go:130] > registry.k8s.io/etcd:3.5.12-0
	I0327 23:46:42.727244    8488 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0327 23:46:42.727244    8488 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0327 23:46:42.727244    8488 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0327 23:46:42.727244    8488 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.29.3
	registry.k8s.io/kube-controller-manager:v1.29.3
	registry.k8s.io/kube-scheduler:v1.29.3
	registry.k8s.io/kube-proxy:v1.29.3
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0327 23:46:42.727244    8488 cache_images.go:84] Images are preloaded, skipping loading
	I0327 23:46:42.727244    8488 kubeadm.go:928] updating node { 172.28.236.250 8441 v1.29.3 docker true true} ...
	I0327 23:46:42.727244    8488 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-848700 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.28.236.250
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:functional-848700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0327 23:46:42.738631    8488 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0327 23:46:42.777127    8488 command_runner.go:130] > cgroupfs
	I0327 23:46:42.777127    8488 cni.go:84] Creating CNI manager for ""
	I0327 23:46:42.777127    8488 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0327 23:46:42.777127    8488 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0327 23:46:42.777127    8488 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.28.236.250 APIServerPort:8441 KubernetesVersion:v1.29.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-848700 NodeName:functional-848700 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.28.236.250"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.28.236.250 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0327 23:46:42.777127    8488 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.28.236.250
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "functional-848700"
	  kubeletExtraArgs:
	    node-ip: 172.28.236.250
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.28.236.250"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0327 23:46:42.790126    8488 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0327 23:46:42.809738    8488 command_runner.go:130] > kubeadm
	I0327 23:46:42.809738    8488 command_runner.go:130] > kubectl
	I0327 23:46:42.809738    8488 command_runner.go:130] > kubelet
	I0327 23:46:42.810109    8488 binaries.go:44] Found k8s binaries, skipping transfer
	I0327 23:46:42.823585    8488 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0327 23:46:42.843759    8488 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0327 23:46:42.879301    8488 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0327 23:46:42.913073    8488 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2165 bytes)
	I0327 23:46:42.961439    8488 ssh_runner.go:195] Run: grep 172.28.236.250	control-plane.minikube.internal$ /etc/hosts
	I0327 23:46:42.968211    8488 command_runner.go:130] > 172.28.236.250	control-plane.minikube.internal
	I0327 23:46:42.981031    8488 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0327 23:46:43.240823    8488 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0327 23:46:43.271950    8488 certs.go:68] Setting up C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-848700 for IP: 172.28.236.250
	I0327 23:46:43.272010    8488 certs.go:194] generating shared ca certs ...
	I0327 23:46:43.272010    8488 certs.go:226] acquiring lock for ca certs: {Name:mkc71405905d3cea24da832e98113e061e759324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 23:46:43.272582    8488 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key
	I0327 23:46:43.273072    8488 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key
	I0327 23:46:43.273330    8488 certs.go:256] generating profile certs ...
	I0327 23:46:43.273615    8488 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-848700\client.key
	I0327 23:46:43.274204    8488 certs.go:359] skipping valid signed profile cert regeneration for "minikube": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-848700\apiserver.key.95d36a81
	I0327 23:46:43.274204    8488 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-848700\proxy-client.key
	I0327 23:46:43.274204    8488 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0327 23:46:43.274839    8488 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0327 23:46:43.275071    8488 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0327 23:46:43.275356    8488 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0327 23:46:43.275466    8488 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-848700\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0327 23:46:43.275466    8488 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-848700\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0327 23:46:43.275466    8488 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-848700\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0327 23:46:43.275466    8488 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-848700\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0327 23:46:43.276738    8488 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\10460.pem (1338 bytes)
	W0327 23:46:43.276936    8488 certs.go:480] ignoring C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\10460_empty.pem, impossibly tiny 0 bytes
	I0327 23:46:43.276936    8488 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0327 23:46:43.276936    8488 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0327 23:46:43.277580    8488 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0327 23:46:43.277896    8488 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0327 23:46:43.278019    8488 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\104602.pem (1708 bytes)
	I0327 23:46:43.278019    8488 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\10460.pem -> /usr/share/ca-certificates/10460.pem
	I0327 23:46:43.278851    8488 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\104602.pem -> /usr/share/ca-certificates/104602.pem
	I0327 23:46:43.279070    8488 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0327 23:46:43.280500    8488 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0327 23:46:43.329043    8488 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0327 23:46:43.382501    8488 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0327 23:46:43.435795    8488 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0327 23:46:43.488490    8488 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-848700\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0327 23:46:43.536276    8488 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-848700\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0327 23:46:43.591228    8488 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-848700\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0327 23:46:43.640455    8488 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-848700\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0327 23:46:43.693803    8488 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\10460.pem --> /usr/share/ca-certificates/10460.pem (1338 bytes)
	I0327 23:46:43.746400    8488 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\104602.pem --> /usr/share/ca-certificates/104602.pem (1708 bytes)
	I0327 23:46:43.799393    8488 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0327 23:46:43.846628    8488 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0327 23:46:43.897375    8488 ssh_runner.go:195] Run: openssl version
	I0327 23:46:43.906399    8488 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0327 23:46:43.919205    8488 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0327 23:46:43.952242    8488 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0327 23:46:43.958528    8488 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Mar 27 23:37 /usr/share/ca-certificates/minikubeCA.pem
	I0327 23:46:43.958958    8488 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 27 23:37 /usr/share/ca-certificates/minikubeCA.pem
	I0327 23:46:43.971747    8488 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0327 23:46:43.983066    8488 command_runner.go:130] > b5213941
	I0327 23:46:43.996129    8488 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0327 23:46:44.024699    8488 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10460.pem && ln -fs /usr/share/ca-certificates/10460.pem /etc/ssl/certs/10460.pem"
	I0327 23:46:44.069942    8488 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10460.pem
	I0327 23:46:44.078034    8488 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Mar 27 23:40 /usr/share/ca-certificates/10460.pem
	I0327 23:46:44.078034    8488 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 27 23:40 /usr/share/ca-certificates/10460.pem
	I0327 23:46:44.094026    8488 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10460.pem
	I0327 23:46:44.104270    8488 command_runner.go:130] > 51391683
	I0327 23:46:44.116373    8488 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10460.pem /etc/ssl/certs/51391683.0"
	I0327 23:46:44.147728    8488 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/104602.pem && ln -fs /usr/share/ca-certificates/104602.pem /etc/ssl/certs/104602.pem"
	I0327 23:46:44.182667    8488 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/104602.pem
	I0327 23:46:44.189765    8488 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Mar 27 23:40 /usr/share/ca-certificates/104602.pem
	I0327 23:46:44.189951    8488 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 27 23:40 /usr/share/ca-certificates/104602.pem
	I0327 23:46:44.202537    8488 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/104602.pem
	I0327 23:46:44.211428    8488 command_runner.go:130] > 3ec20f2e
	I0327 23:46:44.224778    8488 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/104602.pem /etc/ssl/certs/3ec20f2e.0"
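The hash-then-symlink sequence above is how OpenSSL trust stores are populated: the symlink in `/etc/ssl/certs` must be named `<subject-hash>.0` for lookup to find the CA. A minimal sketch with a throwaway self-signed cert standing in for `minikubeCA.pem`:

```shell
set -eu
certdir=$(mktemp -d)
# Throwaway self-signed CA; the real cert lives inside the VM at /usr/share/ca-certificates.
openssl req -x509 -newkey rsa:2048 -nodes -subj '/CN=demoCA' \
  -keyout "$certdir/ca.key" -out "$certdir/ca.pem" -days 1 2>/dev/null
# OpenSSL resolves trust anchors by subject hash (e.g. b5213941 in the log above).
hash=$(openssl x509 -hash -noout -in "$certdir/ca.pem")
ln -fs "$certdir/ca.pem" "$certdir/$hash.0"
echo "$hash"
```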
	I0327 23:46:44.260640    8488 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0327 23:46:44.273055    8488 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0327 23:46:44.273055    8488 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0327 23:46:44.273055    8488 command_runner.go:130] > Device: 8,1	Inode: 7337264     Links: 1
	I0327 23:46:44.273055    8488 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0327 23:46:44.273055    8488 command_runner.go:130] > Access: 2024-03-27 23:43:52.867500892 +0000
	I0327 23:46:44.273172    8488 command_runner.go:130] > Modify: 2024-03-27 23:43:52.867500892 +0000
	I0327 23:46:44.273172    8488 command_runner.go:130] > Change: 2024-03-27 23:43:52.867500892 +0000
	I0327 23:46:44.273172    8488 command_runner.go:130] >  Birth: 2024-03-27 23:43:52.867500892 +0000
	I0327 23:46:44.286119    8488 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0327 23:46:44.295372    8488 command_runner.go:130] > Certificate will not expire
	I0327 23:46:44.306592    8488 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0327 23:46:44.316024    8488 command_runner.go:130] > Certificate will not expire
	I0327 23:46:44.329403    8488 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0327 23:46:44.338703    8488 command_runner.go:130] > Certificate will not expire
	I0327 23:46:44.352280    8488 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0327 23:46:44.361561    8488 command_runner.go:130] > Certificate will not expire
	I0327 23:46:44.373843    8488 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0327 23:46:44.383404    8488 command_runner.go:130] > Certificate will not expire
	I0327 23:46:44.398154    8488 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0327 23:46:44.407374    8488 command_runner.go:130] > Certificate will not expire
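The repeated `-checkend 86400` runs above verify that each control-plane certificate outlives the next 24 hours; `openssl x509 -checkend` exits 0 when the cert will still be valid after the given number of seconds. A sketch with a freshly generated cert (the path is hypothetical, not one of the VM's certs):

```shell
set -eu
dir=$(mktemp -d)
# Self-signed cert valid for 2 days, standing in for apiserver-kubelet-client.crt.
openssl req -x509 -newkey rsa:2048 -nodes -subj '/CN=demo' \
  -keyout "$dir/tls.key" -out "$dir/tls.crt" -days 2 2>/dev/null
# Exit status 0 means the cert does not expire within the next 86400 seconds.
if openssl x509 -noout -in "$dir/tls.crt" -checkend 86400; then
  result="valid"
else
  result="expiring"
fi
echo "$result"
```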
	I0327 23:46:44.407879    8488 kubeadm.go:391] StartCluster: {Name:functional-848700 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.
29.3 ClusterName:functional-848700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.28.236.250 Port:8441 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L Mou
ntGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0327 23:46:44.420273    8488 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0327 23:46:44.459745    8488 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0327 23:46:44.482306    8488 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I0327 23:46:44.482306    8488 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I0327 23:46:44.482306    8488 command_runner.go:130] > /var/lib/minikube/etcd:
	I0327 23:46:44.482460    8488 command_runner.go:130] > member
	W0327 23:46:44.482530    8488 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0327 23:46:44.482530    8488 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0327 23:46:44.482530    8488 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0327 23:46:44.497446    8488 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0327 23:46:44.518971    8488 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0327 23:46:44.521403    8488 kubeconfig.go:125] found "functional-848700" server: "https://172.28.236.250:8441"
	I0327 23:46:44.523276    8488 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0327 23:46:44.524269    8488 kapi.go:59] client config for functional-848700: &rest.Config{Host:"https://172.28.236.250:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-848700\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-848700\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil),
CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x26ab500), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0327 23:46:44.525815    8488 cert_rotation.go:137] Starting client certificate rotation controller
	I0327 23:46:44.537742    8488 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0327 23:46:44.559545    8488 kubeadm.go:624] The running cluster does not require reconfiguration: 172.28.236.250
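The restart path above decides whether to re-run kubeadm by diffing the deployed `kubeadm.yaml` against the freshly generated `.new` copy; identical files mean the running cluster needs no reconfiguration. A minimal sketch of that decision, using temp files in place of `/var/tmp/minikube`:

```shell
set -eu
dir=$(mktemp -d)
# Two identical configs, mimicking kubeadm.yaml and the freshly scp'd kubeadm.yaml.new.
printf 'kind: ClusterConfiguration\n' > "$dir/kubeadm.yaml"
printf 'kind: ClusterConfiguration\n' > "$dir/kubeadm.yaml.new"
# diff exits 0 when the files match, so the restart can skip kubeadm re-init.
if diff -u "$dir/kubeadm.yaml" "$dir/kubeadm.yaml.new" >/dev/null; then
  decision="no reconfiguration required"
else
  decision="reconfigure"
fi
echo "$decision"
```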
	I0327 23:46:44.559708    8488 kubeadm.go:1154] stopping kube-system containers ...
	I0327 23:46:44.571029    8488 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0327 23:46:44.605528    8488 command_runner.go:130] > 8446d864143c
	I0327 23:46:44.605528    8488 command_runner.go:130] > 7207097dd1ab
	I0327 23:46:44.605528    8488 command_runner.go:130] > f46b0aee9518
	I0327 23:46:44.605528    8488 command_runner.go:130] > 0e76b3ceb285
	I0327 23:46:44.605528    8488 command_runner.go:130] > 9e8bea3bcc4e
	I0327 23:46:44.605528    8488 command_runner.go:130] > 9000ea8c0bbc
	I0327 23:46:44.605528    8488 command_runner.go:130] > 42ef5b0003c2
	I0327 23:46:44.605528    8488 command_runner.go:130] > 5f04c49c6fd3
	I0327 23:46:44.605528    8488 command_runner.go:130] > 69f1635a58fd
	I0327 23:46:44.605528    8488 command_runner.go:130] > 8500b7ce7c19
	I0327 23:46:44.605528    8488 command_runner.go:130] > 9ae65303a3e0
	I0327 23:46:44.605528    8488 command_runner.go:130] > 07f0c4c15173
	I0327 23:46:44.605528    8488 command_runner.go:130] > 8aad4eaab9c5
	I0327 23:46:44.605528    8488 command_runner.go:130] > bcd5d1eb7fb6
	I0327 23:46:44.605528    8488 docker.go:483] Stopping containers: [8446d864143c 7207097dd1ab f46b0aee9518 0e76b3ceb285 9e8bea3bcc4e 9000ea8c0bbc 42ef5b0003c2 5f04c49c6fd3 69f1635a58fd 8500b7ce7c19 9ae65303a3e0 07f0c4c15173 8aad4eaab9c5 bcd5d1eb7fb6]
	I0327 23:46:44.616394    8488 ssh_runner.go:195] Run: docker stop 8446d864143c 7207097dd1ab f46b0aee9518 0e76b3ceb285 9e8bea3bcc4e 9000ea8c0bbc 42ef5b0003c2 5f04c49c6fd3 69f1635a58fd 8500b7ce7c19 9ae65303a3e0 07f0c4c15173 8aad4eaab9c5 bcd5d1eb7fb6
	I0327 23:46:44.646417    8488 command_runner.go:130] > 8446d864143c
	I0327 23:46:44.647194    8488 command_runner.go:130] > 7207097dd1ab
	I0327 23:46:44.647194    8488 command_runner.go:130] > f46b0aee9518
	I0327 23:46:44.647194    8488 command_runner.go:130] > 0e76b3ceb285
	I0327 23:46:44.647285    8488 command_runner.go:130] > 9e8bea3bcc4e
	I0327 23:46:44.647285    8488 command_runner.go:130] > 9000ea8c0bbc
	I0327 23:46:44.647285    8488 command_runner.go:130] > 42ef5b0003c2
	I0327 23:46:44.647285    8488 command_runner.go:130] > 5f04c49c6fd3
	I0327 23:46:44.647285    8488 command_runner.go:130] > 69f1635a58fd
	I0327 23:46:44.647285    8488 command_runner.go:130] > 8500b7ce7c19
	I0327 23:46:44.647285    8488 command_runner.go:130] > 9ae65303a3e0
	I0327 23:46:44.647285    8488 command_runner.go:130] > 07f0c4c15173
	I0327 23:46:44.647285    8488 command_runner.go:130] > 8aad4eaab9c5
	I0327 23:46:44.647285    8488 command_runner.go:130] > bcd5d1eb7fb6
	I0327 23:46:44.664837    8488 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0327 23:46:44.743385    8488 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0327 23:46:44.765109    8488 command_runner.go:130] > -rw------- 1 root root 5651 Mar 27 23:43 /etc/kubernetes/admin.conf
	I0327 23:46:44.765109    8488 command_runner.go:130] > -rw------- 1 root root 5654 Mar 27 23:43 /etc/kubernetes/controller-manager.conf
	I0327 23:46:44.765109    8488 command_runner.go:130] > -rw------- 1 root root 2007 Mar 27 23:44 /etc/kubernetes/kubelet.conf
	I0327 23:46:44.765109    8488 command_runner.go:130] > -rw------- 1 root root 5606 Mar 27 23:43 /etc/kubernetes/scheduler.conf
	I0327 23:46:44.765109    8488 kubeadm.go:156] found existing configuration files:
	-rw------- 1 root root 5651 Mar 27 23:43 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5654 Mar 27 23:43 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2007 Mar 27 23:44 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5606 Mar 27 23:43 /etc/kubernetes/scheduler.conf
	
	I0327 23:46:44.779317    8488 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I0327 23:46:44.803481    8488 command_runner.go:130] >     server: https://control-plane.minikube.internal:8441
	I0327 23:46:44.816867    8488 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I0327 23:46:44.836716    8488 command_runner.go:130] >     server: https://control-plane.minikube.internal:8441
	I0327 23:46:44.848710    8488 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I0327 23:46:44.869151    8488 kubeadm.go:162] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0327 23:46:44.881895    8488 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0327 23:46:44.915731    8488 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I0327 23:46:44.934148    8488 kubeadm.go:162] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0327 23:46:44.948025    8488 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0327 23:46:44.986751    8488 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0327 23:46:45.007436    8488 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0327 23:46:45.097674    8488 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0327 23:46:45.097674    8488 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0327 23:46:45.097674    8488 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0327 23:46:45.097794    8488 command_runner.go:130] > [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0327 23:46:45.097794    8488 command_runner.go:130] > [certs] Using existing front-proxy-ca certificate authority
	I0327 23:46:45.097794    8488 command_runner.go:130] > [certs] Using existing front-proxy-client certificate and key on disk
	I0327 23:46:45.097794    8488 command_runner.go:130] > [certs] Using existing etcd/ca certificate authority
	I0327 23:46:45.097794    8488 command_runner.go:130] > [certs] Using existing etcd/server certificate and key on disk
	I0327 23:46:45.097794    8488 command_runner.go:130] > [certs] Using existing etcd/peer certificate and key on disk
	I0327 23:46:45.097794    8488 command_runner.go:130] > [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0327 23:46:45.097794    8488 command_runner.go:130] > [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0327 23:46:45.097794    8488 command_runner.go:130] > [certs] Using the existing "sa" key
	I0327 23:46:45.098168    8488 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0327 23:46:46.743217    8488 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0327 23:46:46.744292    8488 command_runner.go:130] > [kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/admin.conf"
	I0327 23:46:46.744292    8488 command_runner.go:130] > [kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/super-admin.conf"
	I0327 23:46:46.744292    8488 command_runner.go:130] > [kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/kubelet.conf"
	I0327 23:46:46.744292    8488 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0327 23:46:46.744292    8488 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0327 23:46:46.744292    8488 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.6461149s)
	I0327 23:46:46.744292    8488 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0327 23:46:47.097466    8488 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0327 23:46:47.097553    8488 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0327 23:46:47.097553    8488 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0327 23:46:47.097553    8488 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0327 23:46:47.199057    8488 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0327 23:46:47.199642    8488 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0327 23:46:47.199642    8488 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0327 23:46:47.199642    8488 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0327 23:46:47.199703    8488 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0327 23:46:47.342974    8488 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0327 23:46:47.343601    8488 api_server.go:52] waiting for apiserver process to appear ...
	I0327 23:46:47.357058    8488 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0327 23:46:47.863128    8488 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0327 23:46:48.362726    8488 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0327 23:46:48.871406    8488 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0327 23:46:49.362629    8488 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0327 23:46:49.392571    8488 command_runner.go:130] > 6556
	I0327 23:46:49.392571    8488 api_server.go:72] duration metric: took 2.0490159s to wait for apiserver process to appear ...
	I0327 23:46:49.392571    8488 api_server.go:88] waiting for apiserver healthz status ...
	I0327 23:46:49.392571    8488 api_server.go:253] Checking apiserver healthz at https://172.28.236.250:8441/healthz ...
	I0327 23:46:52.996866    8488 api_server.go:279] https://172.28.236.250:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0327 23:46:52.996866    8488 api_server.go:103] status: https://172.28.236.250:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0327 23:46:52.997296    8488 api_server.go:253] Checking apiserver healthz at https://172.28.236.250:8441/healthz ...
	I0327 23:46:53.057810    8488 api_server.go:279] https://172.28.236.250:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0327 23:46:53.057878    8488 api_server.go:103] status: https://172.28.236.250:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0327 23:46:53.407005    8488 api_server.go:253] Checking apiserver healthz at https://172.28.236.250:8441/healthz ...
	I0327 23:46:53.415880    8488 api_server.go:279] https://172.28.236.250:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0327 23:46:53.415880    8488 api_server.go:103] status: https://172.28.236.250:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0327 23:46:53.900360    8488 api_server.go:253] Checking apiserver healthz at https://172.28.236.250:8441/healthz ...
	I0327 23:46:53.909875    8488 api_server.go:279] https://172.28.236.250:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0327 23:46:53.909914    8488 api_server.go:103] status: https://172.28.236.250:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0327 23:46:54.408825    8488 api_server.go:253] Checking apiserver healthz at https://172.28.236.250:8441/healthz ...
	I0327 23:46:54.427555    8488 api_server.go:279] https://172.28.236.250:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0327 23:46:54.427612    8488 api_server.go:103] status: https://172.28.236.250:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0327 23:46:54.901211    8488 api_server.go:253] Checking apiserver healthz at https://172.28.236.250:8441/healthz ...
	I0327 23:46:54.920890    8488 api_server.go:279] https://172.28.236.250:8441/healthz returned 200:
	ok
	I0327 23:46:54.921333    8488 round_trippers.go:463] GET https://172.28.236.250:8441/version
	I0327 23:46:54.921427    8488 round_trippers.go:469] Request Headers:
	I0327 23:46:54.921479    8488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0327 23:46:54.921479    8488 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:46:54.936965    8488 round_trippers.go:574] Response Status: 200 OK in 15 milliseconds
	I0327 23:46:54.936965    8488 round_trippers.go:577] Response Headers:
	I0327 23:46:54.936965    8488 round_trippers.go:580]     Audit-Id: 11b222a4-2a86-4bfc-903a-fae06c516053
	I0327 23:46:54.936965    8488 round_trippers.go:580]     Cache-Control: no-cache, private
	I0327 23:46:54.936965    8488 round_trippers.go:580]     Content-Type: application/json
	I0327 23:46:54.937135    8488 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6da92fab-2b78-421e-a606-c6bce8c08812
	I0327 23:46:54.937135    8488 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7d975901-91e1-451b-9e7d-51fab94bcf42
	I0327 23:46:54.937135    8488 round_trippers.go:580]     Content-Length: 263
	I0327 23:46:54.937135    8488 round_trippers.go:580]     Date: Wed, 27 Mar 2024 23:46:54 GMT
	I0327 23:46:54.937265    8488 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "29",
	  "gitVersion": "v1.29.3",
	  "gitCommit": "6813625b7cd706db5bc7388921be03071e1a492d",
	  "gitTreeState": "clean",
	  "buildDate": "2024-03-14T23:58:36Z",
	  "goVersion": "go1.21.8",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0327 23:46:54.937391    8488 api_server.go:141] control plane version: v1.29.3
	I0327 23:46:54.937456    8488 api_server.go:131] duration metric: took 5.5448533s to wait for apiserver health ...
	I0327 23:46:54.937548    8488 cni.go:84] Creating CNI manager for ""
	I0327 23:46:54.937548    8488 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0327 23:46:54.944668    8488 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0327 23:46:54.962404    8488 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0327 23:46:54.984667    8488 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0327 23:46:55.033418    8488 system_pods.go:43] waiting for kube-system pods to appear ...
	I0327 23:46:55.033742    8488 round_trippers.go:463] GET https://172.28.236.250:8441/api/v1/namespaces/kube-system/pods
	I0327 23:46:55.033788    8488 round_trippers.go:469] Request Headers:
	I0327 23:46:55.033811    8488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0327 23:46:55.033811    8488 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:46:55.047219    8488 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0327 23:46:55.047219    8488 round_trippers.go:577] Response Headers:
	I0327 23:46:55.047295    8488 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7d975901-91e1-451b-9e7d-51fab94bcf42
	I0327 23:46:55.047295    8488 round_trippers.go:580]     Date: Wed, 27 Mar 2024 23:46:55 GMT
	I0327 23:46:55.047295    8488 round_trippers.go:580]     Audit-Id: 5844e3ee-12f9-4b22-832b-77670cf135a1
	I0327 23:46:55.047349    8488 round_trippers.go:580]     Cache-Control: no-cache, private
	I0327 23:46:55.047349    8488 round_trippers.go:580]     Content-Type: application/json
	I0327 23:46:55.047349    8488 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6da92fab-2b78-421e-a606-c6bce8c08812
	I0327 23:46:55.048445    8488 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"506"},"items":[{"metadata":{"name":"coredns-76f75df574-kl22d","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"68395922-8215-40eb-ba25-a66d3a484a61","resourceVersion":"500","creationTimestamp":"2024-03-27T23:44:20Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"d781c69f-ccf7-46ad-b095-0f53e5da83c8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-27T23:44:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d781c69f-ccf7-46ad-b095-0f53e5da83c8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 51568 chars]
	I0327 23:46:55.054634    8488 system_pods.go:59] 7 kube-system pods found
	I0327 23:46:55.054765    8488 system_pods.go:61] "coredns-76f75df574-kl22d" [68395922-8215-40eb-ba25-a66d3a484a61] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0327 23:46:55.054765    8488 system_pods.go:61] "etcd-functional-848700" [05354af6-6cc0-48a9-899a-aba82a561744] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0327 23:46:55.054765    8488 system_pods.go:61] "kube-apiserver-functional-848700" [e4bdd59f-da15-4c7c-bf7b-edf829ae8b0b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0327 23:46:55.054818    8488 system_pods.go:61] "kube-controller-manager-functional-848700" [b43977b6-8078-475c-b311-a70a0e45e1e0] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0327 23:46:55.054854    8488 system_pods.go:61] "kube-proxy-njwdc" [862af240-aef4-4288-818c-2a9a96564cba] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0327 23:46:55.054854    8488 system_pods.go:61] "kube-scheduler-functional-848700" [e3288622-5a80-4186-aedf-e189cadec8fb] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0327 23:46:55.054854    8488 system_pods.go:61] "storage-provisioner" [b22b2f6c-1e15-4539-9cec-25649ec63e34] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0327 23:46:55.054854    8488 system_pods.go:74] duration metric: took 21.3747ms to wait for pod list to return data ...
	I0327 23:46:55.054926    8488 node_conditions.go:102] verifying NodePressure condition ...
	I0327 23:46:55.055070    8488 round_trippers.go:463] GET https://172.28.236.250:8441/api/v1/nodes
	I0327 23:46:55.055070    8488 round_trippers.go:469] Request Headers:
	I0327 23:46:55.055125    8488 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:46:55.055125    8488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0327 23:46:55.068130    8488 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0327 23:46:55.068130    8488 round_trippers.go:577] Response Headers:
	I0327 23:46:55.068130    8488 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7d975901-91e1-451b-9e7d-51fab94bcf42
	I0327 23:46:55.068130    8488 round_trippers.go:580]     Date: Wed, 27 Mar 2024 23:46:55 GMT
	I0327 23:46:55.068130    8488 round_trippers.go:580]     Audit-Id: 4737a38c-0a9e-42e4-86de-01e2b117d53f
	I0327 23:46:55.068130    8488 round_trippers.go:580]     Cache-Control: no-cache, private
	I0327 23:46:55.068130    8488 round_trippers.go:580]     Content-Type: application/json
	I0327 23:46:55.068130    8488 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6da92fab-2b78-421e-a606-c6bce8c08812
	I0327 23:46:55.070389    8488 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"506"},"items":[{"metadata":{"name":"functional-848700","uid":"7cf3af13-2a24-42b6-949f-e852ccdeb5d5","resourceVersion":"497","creationTimestamp":"2024-03-27T23:44:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-848700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"functional-848700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_27T23_44_08_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"m
anagedFields":[{"manager":"kubelet","operation":"Update","apiVersion":" [truncated 4848 chars]
	I0327 23:46:55.071767    8488 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0327 23:46:55.071821    8488 node_conditions.go:123] node cpu capacity is 2
	I0327 23:46:55.071887    8488 node_conditions.go:105] duration metric: took 16.9609ms to run NodePressure ...
	I0327 23:46:55.071887    8488 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0327 23:46:55.567442    8488 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0327 23:46:55.567442    8488 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0327 23:46:55.567442    8488 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0327 23:46:55.567771    8488 round_trippers.go:463] GET https://172.28.236.250:8441/api/v1/namespaces/kube-system/pods?labelSelector=tier%!D(MISSING)control-plane
	I0327 23:46:55.567771    8488 round_trippers.go:469] Request Headers:
	I0327 23:46:55.567985    8488 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:46:55.567985    8488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0327 23:46:55.573352    8488 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0327 23:46:55.573352    8488 round_trippers.go:577] Response Headers:
	I0327 23:46:55.573352    8488 round_trippers.go:580]     Date: Wed, 27 Mar 2024 23:46:55 GMT
	I0327 23:46:55.573352    8488 round_trippers.go:580]     Audit-Id: d83782d5-9f75-4ce7-bb8c-8146113014ca
	I0327 23:46:55.573785    8488 round_trippers.go:580]     Cache-Control: no-cache, private
	I0327 23:46:55.573785    8488 round_trippers.go:580]     Content-Type: application/json
	I0327 23:46:55.573785    8488 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6da92fab-2b78-421e-a606-c6bce8c08812
	I0327 23:46:55.573785    8488 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7d975901-91e1-451b-9e7d-51fab94bcf42
	I0327 23:46:55.575378    8488 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"508"},"items":[{"metadata":{"name":"etcd-functional-848700","namespace":"kube-system","uid":"05354af6-6cc0-48a9-899a-aba82a561744","resourceVersion":"501","creationTimestamp":"2024-03-27T23:44:04Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.28.236.250:2379","kubernetes.io/config.hash":"4f92c5f2db0a0990f33d8686b823140a","kubernetes.io/config.mirror":"4f92c5f2db0a0990f33d8686b823140a","kubernetes.io/config.seen":"2024-03-27T23:43:57.947500179Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-848700","uid":"7cf3af13-2a24-42b6-949f-e852ccdeb5d5","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-27T23:44:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotati
ons":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f [truncated 30988 chars]
	I0327 23:46:55.576945    8488 kubeadm.go:733] kubelet initialised
	I0327 23:46:55.577568    8488 kubeadm.go:734] duration metric: took 9.9887ms waiting for restarted kubelet to initialise ...
	I0327 23:46:55.577756    8488 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0327 23:46:55.577756    8488 round_trippers.go:463] GET https://172.28.236.250:8441/api/v1/namespaces/kube-system/pods
	I0327 23:46:55.577756    8488 round_trippers.go:469] Request Headers:
	I0327 23:46:55.577756    8488 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:46:55.577756    8488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0327 23:46:55.582692    8488 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0327 23:46:55.582692    8488 round_trippers.go:577] Response Headers:
	I0327 23:46:55.582692    8488 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7d975901-91e1-451b-9e7d-51fab94bcf42
	I0327 23:46:55.583125    8488 round_trippers.go:580]     Date: Wed, 27 Mar 2024 23:46:55 GMT
	I0327 23:46:55.583125    8488 round_trippers.go:580]     Audit-Id: 67a25745-1898-40d8-8a57-5ef108d7c5ea
	I0327 23:46:55.583125    8488 round_trippers.go:580]     Cache-Control: no-cache, private
	I0327 23:46:55.583125    8488 round_trippers.go:580]     Content-Type: application/json
	I0327 23:46:55.583125    8488 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6da92fab-2b78-421e-a606-c6bce8c08812
	I0327 23:46:55.584631    8488 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"508"},"items":[{"metadata":{"name":"coredns-76f75df574-kl22d","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"68395922-8215-40eb-ba25-a66d3a484a61","resourceVersion":"500","creationTimestamp":"2024-03-27T23:44:20Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"d781c69f-ccf7-46ad-b095-0f53e5da83c8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-27T23:44:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d781c69f-ccf7-46ad-b095-0f53e5da83c8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 51568 chars]
	I0327 23:46:55.587129    8488 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-kl22d" in "kube-system" namespace to be "Ready" ...
	I0327 23:46:55.587231    8488 round_trippers.go:463] GET https://172.28.236.250:8441/api/v1/namespaces/kube-system/pods/coredns-76f75df574-kl22d
	I0327 23:46:55.587334    8488 round_trippers.go:469] Request Headers:
	I0327 23:46:55.587382    8488 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:46:55.587382    8488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0327 23:46:55.590712    8488 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0327 23:46:55.590712    8488 round_trippers.go:577] Response Headers:
	I0327 23:46:55.591722    8488 round_trippers.go:580]     Content-Type: application/json
	I0327 23:46:55.591722    8488 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6da92fab-2b78-421e-a606-c6bce8c08812
	I0327 23:46:55.591722    8488 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7d975901-91e1-451b-9e7d-51fab94bcf42
	I0327 23:46:55.591722    8488 round_trippers.go:580]     Date: Wed, 27 Mar 2024 23:46:55 GMT
	I0327 23:46:55.591774    8488 round_trippers.go:580]     Audit-Id: ba172e64-9cbd-4b3d-b11e-c273c2f87695
	I0327 23:46:55.591774    8488 round_trippers.go:580]     Cache-Control: no-cache, private
	I0327 23:46:55.592082    8488 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-76f75df574-kl22d","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"68395922-8215-40eb-ba25-a66d3a484a61","resourceVersion":"500","creationTimestamp":"2024-03-27T23:44:20Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"d781c69f-ccf7-46ad-b095-0f53e5da83c8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-27T23:44:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d781c69f-ccf7-46ad-b095-0f53e5da83c8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6505 chars]
	I0327 23:46:55.592718    8488 round_trippers.go:463] GET https://172.28.236.250:8441/api/v1/nodes/functional-848700
	I0327 23:46:55.592718    8488 round_trippers.go:469] Request Headers:
	I0327 23:46:55.592824    8488 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:46:55.592824    8488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0327 23:46:55.595303    8488 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0327 23:46:55.595842    8488 round_trippers.go:577] Response Headers:
	I0327 23:46:55.595842    8488 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7d975901-91e1-451b-9e7d-51fab94bcf42
	I0327 23:46:55.595842    8488 round_trippers.go:580]     Date: Wed, 27 Mar 2024 23:46:55 GMT
	I0327 23:46:55.595903    8488 round_trippers.go:580]     Audit-Id: 68664249-c909-4f01-9bcf-1e7ba6a09ca8
	I0327 23:46:55.595903    8488 round_trippers.go:580]     Cache-Control: no-cache, private
	I0327 23:46:55.595903    8488 round_trippers.go:580]     Content-Type: application/json
	I0327 23:46:55.595903    8488 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6da92fab-2b78-421e-a606-c6bce8c08812
	I0327 23:46:55.596291    8488 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-848700","uid":"7cf3af13-2a24-42b6-949f-e852ccdeb5d5","resourceVersion":"497","creationTimestamp":"2024-03-27T23:44:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-848700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"functional-848700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_27T23_44_08_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Up
date","apiVersion":"v1","time":"2024-03-27T23:44:03Z","fieldsType":"Fie [truncated 4795 chars]
	I0327 23:46:56.101220    8488 round_trippers.go:463] GET https://172.28.236.250:8441/api/v1/namespaces/kube-system/pods/coredns-76f75df574-kl22d
	I0327 23:46:56.101286    8488 round_trippers.go:469] Request Headers:
	I0327 23:46:56.101286    8488 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:46:56.101348    8488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0327 23:46:56.108092    8488 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0327 23:46:56.109145    8488 round_trippers.go:577] Response Headers:
	I0327 23:46:56.109145    8488 round_trippers.go:580]     Date: Wed, 27 Mar 2024 23:46:56 GMT
	I0327 23:46:56.109145    8488 round_trippers.go:580]     Audit-Id: 3ce10f8f-6c2d-4501-b482-c557bedf3995
	I0327 23:46:56.109145    8488 round_trippers.go:580]     Cache-Control: no-cache, private
	I0327 23:46:56.109145    8488 round_trippers.go:580]     Content-Type: application/json
	I0327 23:46:56.109145    8488 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6da92fab-2b78-421e-a606-c6bce8c08812
	I0327 23:46:56.109145    8488 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7d975901-91e1-451b-9e7d-51fab94bcf42
	I0327 23:46:56.109322    8488 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-76f75df574-kl22d","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"68395922-8215-40eb-ba25-a66d3a484a61","resourceVersion":"510","creationTimestamp":"2024-03-27T23:44:20Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"d781c69f-ccf7-46ad-b095-0f53e5da83c8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-27T23:44:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d781c69f-ccf7-46ad-b095-0f53e5da83c8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6681 chars]
	I0327 23:46:56.110115    8488 round_trippers.go:463] GET https://172.28.236.250:8441/api/v1/nodes/functional-848700
	I0327 23:46:56.110115    8488 round_trippers.go:469] Request Headers:
	I0327 23:46:56.110115    8488 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:46:56.110115    8488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0327 23:46:56.121061    8488 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0327 23:46:56.121061    8488 round_trippers.go:577] Response Headers:
	I0327 23:46:56.121061    8488 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6da92fab-2b78-421e-a606-c6bce8c08812
	I0327 23:46:56.121061    8488 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7d975901-91e1-451b-9e7d-51fab94bcf42
	I0327 23:46:56.121061    8488 round_trippers.go:580]     Date: Wed, 27 Mar 2024 23:46:56 GMT
	I0327 23:46:56.121061    8488 round_trippers.go:580]     Audit-Id: 0a908bde-683e-4dc2-a3c4-6cd5e5487a1f
	I0327 23:46:56.121061    8488 round_trippers.go:580]     Cache-Control: no-cache, private
	I0327 23:46:56.121061    8488 round_trippers.go:580]     Content-Type: application/json
	I0327 23:46:56.121061    8488 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-848700","uid":"7cf3af13-2a24-42b6-949f-e852ccdeb5d5","resourceVersion":"497","creationTimestamp":"2024-03-27T23:44:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-848700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"functional-848700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_27T23_44_08_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Up
date","apiVersion":"v1","time":"2024-03-27T23:44:03Z","fieldsType":"Fie [truncated 4795 chars]
	I0327 23:46:56.595332    8488 round_trippers.go:463] GET https://172.28.236.250:8441/api/v1/namespaces/kube-system/pods/coredns-76f75df574-kl22d
	I0327 23:46:56.595396    8488 round_trippers.go:469] Request Headers:
	I0327 23:46:56.595396    8488 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:46:56.595396    8488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0327 23:46:56.599048    8488 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0327 23:46:56.599435    8488 round_trippers.go:577] Response Headers:
	I0327 23:46:56.599435    8488 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7d975901-91e1-451b-9e7d-51fab94bcf42
	I0327 23:46:56.599506    8488 round_trippers.go:580]     Date: Wed, 27 Mar 2024 23:46:56 GMT
	I0327 23:46:56.599506    8488 round_trippers.go:580]     Audit-Id: 5bb2a551-6a2c-40b1-a3ca-333cebdc946a
	I0327 23:46:56.599506    8488 round_trippers.go:580]     Cache-Control: no-cache, private
	I0327 23:46:56.599506    8488 round_trippers.go:580]     Content-Type: application/json
	I0327 23:46:56.599506    8488 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6da92fab-2b78-421e-a606-c6bce8c08812
	I0327 23:46:56.599797    8488 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-76f75df574-kl22d","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"68395922-8215-40eb-ba25-a66d3a484a61","resourceVersion":"510","creationTimestamp":"2024-03-27T23:44:20Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"d781c69f-ccf7-46ad-b095-0f53e5da83c8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-27T23:44:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d781c69f-ccf7-46ad-b095-0f53e5da83c8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6681 chars]
	I0327 23:46:56.600503    8488 round_trippers.go:463] GET https://172.28.236.250:8441/api/v1/nodes/functional-848700
	I0327 23:46:56.600623    8488 round_trippers.go:469] Request Headers:
	I0327 23:46:56.600623    8488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0327 23:46:56.600681    8488 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:46:56.616300    8488 round_trippers.go:574] Response Status: 200 OK in 15 milliseconds
	I0327 23:46:56.616375    8488 round_trippers.go:577] Response Headers:
	I0327 23:46:56.616375    8488 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7d975901-91e1-451b-9e7d-51fab94bcf42
	I0327 23:46:56.616375    8488 round_trippers.go:580]     Date: Wed, 27 Mar 2024 23:46:56 GMT
	I0327 23:46:56.616375    8488 round_trippers.go:580]     Audit-Id: e698ab4a-5c3d-4ecb-a6b5-52fbf2bc3e9c
	I0327 23:46:56.616375    8488 round_trippers.go:580]     Cache-Control: no-cache, private
	I0327 23:46:56.616503    8488 round_trippers.go:580]     Content-Type: application/json
	I0327 23:46:56.616503    8488 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6da92fab-2b78-421e-a606-c6bce8c08812
	I0327 23:46:56.616565    8488 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-848700","uid":"7cf3af13-2a24-42b6-949f-e852ccdeb5d5","resourceVersion":"497","creationTimestamp":"2024-03-27T23:44:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-848700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"functional-848700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_27T23_44_08_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Up
date","apiVersion":"v1","time":"2024-03-27T23:44:03Z","fieldsType":"Fie [truncated 4795 chars]
	I0327 23:46:57.094662    8488 round_trippers.go:463] GET https://172.28.236.250:8441/api/v1/namespaces/kube-system/pods/coredns-76f75df574-kl22d
	I0327 23:46:57.094901    8488 round_trippers.go:469] Request Headers:
	I0327 23:46:57.094901    8488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0327 23:46:57.094901    8488 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:46:57.099406    8488 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0327 23:46:57.099497    8488 round_trippers.go:577] Response Headers:
	I0327 23:46:57.099497    8488 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7d975901-91e1-451b-9e7d-51fab94bcf42
	I0327 23:46:57.099497    8488 round_trippers.go:580]     Date: Wed, 27 Mar 2024 23:46:57 GMT
	I0327 23:46:57.099497    8488 round_trippers.go:580]     Audit-Id: f08d6e2f-5cef-4294-942f-7c2dcb2d3b5b
	I0327 23:46:57.099497    8488 round_trippers.go:580]     Cache-Control: no-cache, private
	I0327 23:46:57.099562    8488 round_trippers.go:580]     Content-Type: application/json
	I0327 23:46:57.099562    8488 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6da92fab-2b78-421e-a606-c6bce8c08812
	I0327 23:46:57.100268    8488 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-76f75df574-kl22d","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"68395922-8215-40eb-ba25-a66d3a484a61","resourceVersion":"510","creationTimestamp":"2024-03-27T23:44:20Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"d781c69f-ccf7-46ad-b095-0f53e5da83c8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-27T23:44:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d781c69f-ccf7-46ad-b095-0f53e5da83c8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6681 chars]
	I0327 23:46:57.101081    8488 round_trippers.go:463] GET https://172.28.236.250:8441/api/v1/nodes/functional-848700
	I0327 23:46:57.101081    8488 round_trippers.go:469] Request Headers:
	I0327 23:46:57.101081    8488 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:46:57.101081    8488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0327 23:46:57.106755    8488 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0327 23:46:57.106755    8488 round_trippers.go:577] Response Headers:
	I0327 23:46:57.106755    8488 round_trippers.go:580]     Date: Wed, 27 Mar 2024 23:46:57 GMT
	I0327 23:46:57.106755    8488 round_trippers.go:580]     Audit-Id: 93d4f807-abc7-4eef-85bb-6e3e8b9a06a3
	I0327 23:46:57.106755    8488 round_trippers.go:580]     Cache-Control: no-cache, private
	I0327 23:46:57.106755    8488 round_trippers.go:580]     Content-Type: application/json
	I0327 23:46:57.106755    8488 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6da92fab-2b78-421e-a606-c6bce8c08812
	I0327 23:46:57.106755    8488 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7d975901-91e1-451b-9e7d-51fab94bcf42
	I0327 23:46:57.107537    8488 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-848700","uid":"7cf3af13-2a24-42b6-949f-e852ccdeb5d5","resourceVersion":"497","creationTimestamp":"2024-03-27T23:44:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-848700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"functional-848700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_27T23_44_08_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Up
date","apiVersion":"v1","time":"2024-03-27T23:44:03Z","fieldsType":"Fie [truncated 4795 chars]
	I0327 23:46:57.594364    8488 round_trippers.go:463] GET https://172.28.236.250:8441/api/v1/namespaces/kube-system/pods/coredns-76f75df574-kl22d
	I0327 23:46:57.594364    8488 round_trippers.go:469] Request Headers:
	I0327 23:46:57.594364    8488 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:46:57.594364    8488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0327 23:46:57.597985    8488 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0327 23:46:57.597985    8488 round_trippers.go:577] Response Headers:
	I0327 23:46:57.597985    8488 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6da92fab-2b78-421e-a606-c6bce8c08812
	I0327 23:46:57.597985    8488 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7d975901-91e1-451b-9e7d-51fab94bcf42
	I0327 23:46:57.598251    8488 round_trippers.go:580]     Date: Wed, 27 Mar 2024 23:46:57 GMT
	I0327 23:46:57.598251    8488 round_trippers.go:580]     Audit-Id: 270c67d6-cf35-422e-ae8a-cbd3cc985ee9
	I0327 23:46:57.598251    8488 round_trippers.go:580]     Cache-Control: no-cache, private
	I0327 23:46:57.598251    8488 round_trippers.go:580]     Content-Type: application/json
	I0327 23:46:57.598351    8488 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-76f75df574-kl22d","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"68395922-8215-40eb-ba25-a66d3a484a61","resourceVersion":"510","creationTimestamp":"2024-03-27T23:44:20Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"d781c69f-ccf7-46ad-b095-0f53e5da83c8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-27T23:44:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d781c69f-ccf7-46ad-b095-0f53e5da83c8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6681 chars]
	I0327 23:46:57.599330    8488 round_trippers.go:463] GET https://172.28.236.250:8441/api/v1/nodes/functional-848700
	I0327 23:46:57.599402    8488 round_trippers.go:469] Request Headers:
	I0327 23:46:57.599402    8488 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:46:57.599402    8488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0327 23:46:57.601952    8488 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0327 23:46:57.601952    8488 round_trippers.go:577] Response Headers:
	I0327 23:46:57.601952    8488 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6da92fab-2b78-421e-a606-c6bce8c08812
	I0327 23:46:57.601952    8488 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7d975901-91e1-451b-9e7d-51fab94bcf42
	I0327 23:46:57.601952    8488 round_trippers.go:580]     Date: Wed, 27 Mar 2024 23:46:57 GMT
	I0327 23:46:57.601952    8488 round_trippers.go:580]     Audit-Id: ac9ce21b-2afa-40e4-b5b3-5043c08e82b8
	I0327 23:46:57.601952    8488 round_trippers.go:580]     Cache-Control: no-cache, private
	I0327 23:46:57.601952    8488 round_trippers.go:580]     Content-Type: application/json
	I0327 23:46:57.603293    8488 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-848700","uid":"7cf3af13-2a24-42b6-949f-e852ccdeb5d5","resourceVersion":"497","creationTimestamp":"2024-03-27T23:44:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-848700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"functional-848700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_27T23_44_08_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Up
date","apiVersion":"v1","time":"2024-03-27T23:44:03Z","fieldsType":"Fie [truncated 4795 chars]
	I0327 23:46:57.603811    8488 pod_ready.go:102] pod "coredns-76f75df574-kl22d" in "kube-system" namespace has status "Ready":"False"
	I0327 23:46:58.093376    8488 round_trippers.go:463] GET https://172.28.236.250:8441/api/v1/namespaces/kube-system/pods/coredns-76f75df574-kl22d
	I0327 23:46:58.093376    8488 round_trippers.go:469] Request Headers:
	I0327 23:46:58.093456    8488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0327 23:46:58.093456    8488 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:46:58.097675    8488 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0327 23:46:58.097675    8488 round_trippers.go:577] Response Headers:
	I0327 23:46:58.098620    8488 round_trippers.go:580]     Audit-Id: f1595f91-a1d5-427d-bfd4-bfc985fd5b4b
	I0327 23:46:58.098620    8488 round_trippers.go:580]     Cache-Control: no-cache, private
	I0327 23:46:58.098620    8488 round_trippers.go:580]     Content-Type: application/json
	I0327 23:46:58.098620    8488 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6da92fab-2b78-421e-a606-c6bce8c08812
	I0327 23:46:58.098620    8488 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7d975901-91e1-451b-9e7d-51fab94bcf42
	I0327 23:46:58.098620    8488 round_trippers.go:580]     Date: Wed, 27 Mar 2024 23:46:58 GMT
	I0327 23:46:58.098620    8488 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-76f75df574-kl22d","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"68395922-8215-40eb-ba25-a66d3a484a61","resourceVersion":"510","creationTimestamp":"2024-03-27T23:44:20Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"d781c69f-ccf7-46ad-b095-0f53e5da83c8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-27T23:44:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d781c69f-ccf7-46ad-b095-0f53e5da83c8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6681 chars]
	I0327 23:46:58.099837    8488 round_trippers.go:463] GET https://172.28.236.250:8441/api/v1/nodes/functional-848700
	I0327 23:46:58.099837    8488 round_trippers.go:469] Request Headers:
	I0327 23:46:58.099837    8488 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:46:58.099837    8488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0327 23:46:58.102450    8488 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0327 23:46:58.102450    8488 round_trippers.go:577] Response Headers:
	I0327 23:46:58.102450    8488 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7d975901-91e1-451b-9e7d-51fab94bcf42
	I0327 23:46:58.102450    8488 round_trippers.go:580]     Date: Wed, 27 Mar 2024 23:46:58 GMT
	I0327 23:46:58.102450    8488 round_trippers.go:580]     Audit-Id: 4e818b21-83d1-4e61-b5c4-6fddd5a61738
	I0327 23:46:58.102450    8488 round_trippers.go:580]     Cache-Control: no-cache, private
	I0327 23:46:58.102450    8488 round_trippers.go:580]     Content-Type: application/json
	I0327 23:46:58.102450    8488 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6da92fab-2b78-421e-a606-c6bce8c08812
	I0327 23:46:58.103871    8488 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-848700","uid":"7cf3af13-2a24-42b6-949f-e852ccdeb5d5","resourceVersion":"497","creationTimestamp":"2024-03-27T23:44:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-848700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"functional-848700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_27T23_44_08_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Up
date","apiVersion":"v1","time":"2024-03-27T23:44:03Z","fieldsType":"Fie [truncated 4795 chars]
	I0327 23:46:58.592550    8488 round_trippers.go:463] GET https://172.28.236.250:8441/api/v1/namespaces/kube-system/pods/coredns-76f75df574-kl22d
	I0327 23:46:58.592550    8488 round_trippers.go:469] Request Headers:
	I0327 23:46:58.592550    8488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0327 23:46:58.592550    8488 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:46:58.598495    8488 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0327 23:46:58.598495    8488 round_trippers.go:577] Response Headers:
	I0327 23:46:58.599159    8488 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7d975901-91e1-451b-9e7d-51fab94bcf42
	I0327 23:46:58.599214    8488 round_trippers.go:580]     Date: Wed, 27 Mar 2024 23:46:58 GMT
	I0327 23:46:58.599214    8488 round_trippers.go:580]     Audit-Id: 71639749-315a-46d5-bcf7-937d9f540aaf
	I0327 23:46:58.599214    8488 round_trippers.go:580]     Cache-Control: no-cache, private
	I0327 23:46:58.599214    8488 round_trippers.go:580]     Content-Type: application/json
	I0327 23:46:58.599214    8488 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6da92fab-2b78-421e-a606-c6bce8c08812
	I0327 23:46:58.599214    8488 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-76f75df574-kl22d","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"68395922-8215-40eb-ba25-a66d3a484a61","resourceVersion":"510","creationTimestamp":"2024-03-27T23:44:20Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"d781c69f-ccf7-46ad-b095-0f53e5da83c8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-27T23:44:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d781c69f-ccf7-46ad-b095-0f53e5da83c8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6681 chars]
	I0327 23:46:58.600345    8488 round_trippers.go:463] GET https://172.28.236.250:8441/api/v1/nodes/functional-848700
	I0327 23:46:58.600345    8488 round_trippers.go:469] Request Headers:
	I0327 23:46:58.600345    8488 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:46:58.600345    8488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0327 23:46:58.606662    8488 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0327 23:46:58.606662    8488 round_trippers.go:577] Response Headers:
	I0327 23:46:58.606662    8488 round_trippers.go:580]     Cache-Control: no-cache, private
	I0327 23:46:58.606662    8488 round_trippers.go:580]     Content-Type: application/json
	I0327 23:46:58.606662    8488 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6da92fab-2b78-421e-a606-c6bce8c08812
	I0327 23:46:58.606662    8488 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7d975901-91e1-451b-9e7d-51fab94bcf42
	I0327 23:46:58.606662    8488 round_trippers.go:580]     Date: Wed, 27 Mar 2024 23:46:58 GMT
	I0327 23:46:58.606662    8488 round_trippers.go:580]     Audit-Id: b6d38e42-568c-4644-af47-eac5368b5946
	I0327 23:46:58.606960    8488 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-848700","uid":"7cf3af13-2a24-42b6-949f-e852ccdeb5d5","resourceVersion":"497","creationTimestamp":"2024-03-27T23:44:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-848700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"functional-848700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_27T23_44_08_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Up
date","apiVersion":"v1","time":"2024-03-27T23:44:03Z","fieldsType":"Fie [truncated 4795 chars]
	I0327 23:46:59.090137    8488 round_trippers.go:463] GET https://172.28.236.250:8441/api/v1/namespaces/kube-system/pods/coredns-76f75df574-kl22d
	I0327 23:46:59.090137    8488 round_trippers.go:469] Request Headers:
	I0327 23:46:59.090137    8488 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:46:59.090137    8488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0327 23:46:59.095692    8488 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0327 23:46:59.095692    8488 round_trippers.go:577] Response Headers:
	I0327 23:46:59.095692    8488 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7d975901-91e1-451b-9e7d-51fab94bcf42
	I0327 23:46:59.095692    8488 round_trippers.go:580]     Date: Wed, 27 Mar 2024 23:46:59 GMT
	I0327 23:46:59.096635    8488 round_trippers.go:580]     Audit-Id: 0ea520bc-79c1-4547-93de-b1bd57fde610
	I0327 23:46:59.096635    8488 round_trippers.go:580]     Cache-Control: no-cache, private
	I0327 23:46:59.096739    8488 round_trippers.go:580]     Content-Type: application/json
	I0327 23:46:59.096739    8488 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6da92fab-2b78-421e-a606-c6bce8c08812
	I0327 23:46:59.097013    8488 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-76f75df574-kl22d","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"68395922-8215-40eb-ba25-a66d3a484a61","resourceVersion":"510","creationTimestamp":"2024-03-27T23:44:20Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"d781c69f-ccf7-46ad-b095-0f53e5da83c8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-27T23:44:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d781c69f-ccf7-46ad-b095-0f53e5da83c8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6681 chars]
	I0327 23:46:59.097865    8488 round_trippers.go:463] GET https://172.28.236.250:8441/api/v1/nodes/functional-848700
	I0327 23:46:59.097930    8488 round_trippers.go:469] Request Headers:
	I0327 23:46:59.097930    8488 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:46:59.097930    8488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0327 23:46:59.100269    8488 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0327 23:46:59.100269    8488 round_trippers.go:577] Response Headers:
	I0327 23:46:59.100269    8488 round_trippers.go:580]     Audit-Id: a9b99d8a-4c01-4335-a2d0-2d4dee4c79bd
	I0327 23:46:59.100269    8488 round_trippers.go:580]     Cache-Control: no-cache, private
	I0327 23:46:59.100269    8488 round_trippers.go:580]     Content-Type: application/json
	I0327 23:46:59.100840    8488 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6da92fab-2b78-421e-a606-c6bce8c08812
	I0327 23:46:59.100840    8488 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7d975901-91e1-451b-9e7d-51fab94bcf42
	I0327 23:46:59.100840    8488 round_trippers.go:580]     Date: Wed, 27 Mar 2024 23:46:59 GMT
	I0327 23:46:59.100927    8488 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-848700","uid":"7cf3af13-2a24-42b6-949f-e852ccdeb5d5","resourceVersion":"497","creationTimestamp":"2024-03-27T23:44:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-848700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"functional-848700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_27T23_44_08_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Up
date","apiVersion":"v1","time":"2024-03-27T23:44:03Z","fieldsType":"Fie [truncated 4795 chars]
	I0327 23:46:59.590654    8488 round_trippers.go:463] GET https://172.28.236.250:8441/api/v1/namespaces/kube-system/pods/coredns-76f75df574-kl22d
	I0327 23:46:59.590654    8488 round_trippers.go:469] Request Headers:
	I0327 23:46:59.591012    8488 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:46:59.591012    8488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0327 23:46:59.595327    8488 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0327 23:46:59.595327    8488 round_trippers.go:577] Response Headers:
	I0327 23:46:59.595327    8488 round_trippers.go:580]     Date: Wed, 27 Mar 2024 23:46:59 GMT
	I0327 23:46:59.595327    8488 round_trippers.go:580]     Audit-Id: a21a7e14-9b80-473a-ab0e-ffbf148eb116
	I0327 23:46:59.595327    8488 round_trippers.go:580]     Cache-Control: no-cache, private
	I0327 23:46:59.595327    8488 round_trippers.go:580]     Content-Type: application/json
	I0327 23:46:59.595327    8488 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6da92fab-2b78-421e-a606-c6bce8c08812
	I0327 23:46:59.595327    8488 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7d975901-91e1-451b-9e7d-51fab94bcf42
	I0327 23:46:59.596132    8488 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-76f75df574-kl22d","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"68395922-8215-40eb-ba25-a66d3a484a61","resourceVersion":"565","creationTimestamp":"2024-03-27T23:44:20Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"d781c69f-ccf7-46ad-b095-0f53e5da83c8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-27T23:44:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d781c69f-ccf7-46ad-b095-0f53e5da83c8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6452 chars]
	I0327 23:46:59.596385    8488 round_trippers.go:463] GET https://172.28.236.250:8441/api/v1/nodes/functional-848700
	I0327 23:46:59.596385    8488 round_trippers.go:469] Request Headers:
	I0327 23:46:59.596385    8488 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:46:59.596385    8488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0327 23:46:59.604120    8488 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0327 23:46:59.604120    8488 round_trippers.go:577] Response Headers:
	I0327 23:46:59.604120    8488 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6da92fab-2b78-421e-a606-c6bce8c08812
	I0327 23:46:59.604120    8488 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7d975901-91e1-451b-9e7d-51fab94bcf42
	I0327 23:46:59.604120    8488 round_trippers.go:580]     Date: Wed, 27 Mar 2024 23:46:59 GMT
	I0327 23:46:59.604120    8488 round_trippers.go:580]     Audit-Id: 7a0c392a-c9fe-4040-bca7-838bebadf0e8
	I0327 23:46:59.604120    8488 round_trippers.go:580]     Cache-Control: no-cache, private
	I0327 23:46:59.604120    8488 round_trippers.go:580]     Content-Type: application/json
	I0327 23:46:59.604759    8488 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-848700","uid":"7cf3af13-2a24-42b6-949f-e852ccdeb5d5","resourceVersion":"497","creationTimestamp":"2024-03-27T23:44:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-848700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"functional-848700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_27T23_44_08_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Up
date","apiVersion":"v1","time":"2024-03-27T23:44:03Z","fieldsType":"Fie [truncated 4795 chars]
	I0327 23:46:59.605337    8488 pod_ready.go:92] pod "coredns-76f75df574-kl22d" in "kube-system" namespace has status "Ready":"True"
	I0327 23:46:59.605337    8488 pod_ready.go:81] duration metric: took 4.0181862s for pod "coredns-76f75df574-kl22d" in "kube-system" namespace to be "Ready" ...
	I0327 23:46:59.605337    8488 pod_ready.go:78] waiting up to 4m0s for pod "etcd-functional-848700" in "kube-system" namespace to be "Ready" ...
	I0327 23:46:59.605495    8488 round_trippers.go:463] GET https://172.28.236.250:8441/api/v1/namespaces/kube-system/pods/etcd-functional-848700
	I0327 23:46:59.605495    8488 round_trippers.go:469] Request Headers:
	I0327 23:46:59.605495    8488 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:46:59.605495    8488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0327 23:46:59.609299    8488 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0327 23:46:59.609299    8488 round_trippers.go:577] Response Headers:
	I0327 23:46:59.609299    8488 round_trippers.go:580]     Audit-Id: 215239b3-9b2c-493d-a7a0-bbcea422b7d8
	I0327 23:46:59.609299    8488 round_trippers.go:580]     Cache-Control: no-cache, private
	I0327 23:46:59.609299    8488 round_trippers.go:580]     Content-Type: application/json
	I0327 23:46:59.609299    8488 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6da92fab-2b78-421e-a606-c6bce8c08812
	I0327 23:46:59.609299    8488 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7d975901-91e1-451b-9e7d-51fab94bcf42
	I0327 23:46:59.609299    8488 round_trippers.go:580]     Date: Wed, 27 Mar 2024 23:46:59 GMT
	I0327 23:46:59.609299    8488 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-848700","namespace":"kube-system","uid":"05354af6-6cc0-48a9-899a-aba82a561744","resourceVersion":"501","creationTimestamp":"2024-03-27T23:44:04Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.28.236.250:2379","kubernetes.io/config.hash":"4f92c5f2db0a0990f33d8686b823140a","kubernetes.io/config.mirror":"4f92c5f2db0a0990f33d8686b823140a","kubernetes.io/config.seen":"2024-03-27T23:43:57.947500179Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-848700","uid":"7cf3af13-2a24-42b6-949f-e852ccdeb5d5","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-27T23:44:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertis
e-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/con [truncated 6608 chars]
	I0327 23:46:59.610337    8488 round_trippers.go:463] GET https://172.28.236.250:8441/api/v1/nodes/functional-848700
	I0327 23:46:59.610337    8488 round_trippers.go:469] Request Headers:
	I0327 23:46:59.610337    8488 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:46:59.610337    8488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0327 23:46:59.613147    8488 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0327 23:46:59.614172    8488 round_trippers.go:577] Response Headers:
	I0327 23:46:59.614172    8488 round_trippers.go:580]     Audit-Id: 6da458ad-e974-4276-b2d7-88193a6bc55b
	I0327 23:46:59.614172    8488 round_trippers.go:580]     Cache-Control: no-cache, private
	I0327 23:46:59.614172    8488 round_trippers.go:580]     Content-Type: application/json
	I0327 23:46:59.614172    8488 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6da92fab-2b78-421e-a606-c6bce8c08812
	I0327 23:46:59.614172    8488 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7d975901-91e1-451b-9e7d-51fab94bcf42
	I0327 23:46:59.614172    8488 round_trippers.go:580]     Date: Wed, 27 Mar 2024 23:46:59 GMT
	I0327 23:46:59.614447    8488 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-848700","uid":"7cf3af13-2a24-42b6-949f-e852ccdeb5d5","resourceVersion":"497","creationTimestamp":"2024-03-27T23:44:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-848700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"functional-848700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_27T23_44_08_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Up
date","apiVersion":"v1","time":"2024-03-27T23:44:03Z","fieldsType":"Fie [truncated 4795 chars]
	I0327 23:47:00.107453    8488 round_trippers.go:463] GET https://172.28.236.250:8441/api/v1/namespaces/kube-system/pods/etcd-functional-848700
	I0327 23:47:00.107549    8488 round_trippers.go:469] Request Headers:
	I0327 23:47:00.107549    8488 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:47:00.107549    8488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0327 23:47:00.112499    8488 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0327 23:47:00.112499    8488 round_trippers.go:577] Response Headers:
	I0327 23:47:00.112499    8488 round_trippers.go:580]     Date: Wed, 27 Mar 2024 23:47:00 GMT
	I0327 23:47:00.112499    8488 round_trippers.go:580]     Audit-Id: c545c3ca-50be-40a7-84fa-68458971dd4a
	I0327 23:47:00.112499    8488 round_trippers.go:580]     Cache-Control: no-cache, private
	I0327 23:47:00.112499    8488 round_trippers.go:580]     Content-Type: application/json
	I0327 23:47:00.112594    8488 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6da92fab-2b78-421e-a606-c6bce8c08812
	I0327 23:47:00.112594    8488 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7d975901-91e1-451b-9e7d-51fab94bcf42
	I0327 23:47:00.112830    8488 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-848700","namespace":"kube-system","uid":"05354af6-6cc0-48a9-899a-aba82a561744","resourceVersion":"501","creationTimestamp":"2024-03-27T23:44:04Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.28.236.250:2379","kubernetes.io/config.hash":"4f92c5f2db0a0990f33d8686b823140a","kubernetes.io/config.mirror":"4f92c5f2db0a0990f33d8686b823140a","kubernetes.io/config.seen":"2024-03-27T23:43:57.947500179Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-848700","uid":"7cf3af13-2a24-42b6-949f-e852ccdeb5d5","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-27T23:44:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertis
e-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/con [truncated 6608 chars]
	I0327 23:47:00.113717    8488 round_trippers.go:463] GET https://172.28.236.250:8441/api/v1/nodes/functional-848700
	I0327 23:47:00.113717    8488 round_trippers.go:469] Request Headers:
	I0327 23:47:00.113717    8488 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:47:00.113717    8488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0327 23:47:00.118094    8488 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0327 23:47:00.118094    8488 round_trippers.go:577] Response Headers:
	I0327 23:47:00.118094    8488 round_trippers.go:580]     Audit-Id: 16ad44dd-bf17-4676-a8ab-afb72434a957
	I0327 23:47:00.118094    8488 round_trippers.go:580]     Cache-Control: no-cache, private
	I0327 23:47:00.118094    8488 round_trippers.go:580]     Content-Type: application/json
	I0327 23:47:00.118192    8488 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6da92fab-2b78-421e-a606-c6bce8c08812
	I0327 23:47:00.118192    8488 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7d975901-91e1-451b-9e7d-51fab94bcf42
	I0327 23:47:00.118192    8488 round_trippers.go:580]     Date: Wed, 27 Mar 2024 23:47:00 GMT
	I0327 23:47:00.118192    8488 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-848700","uid":"7cf3af13-2a24-42b6-949f-e852ccdeb5d5","resourceVersion":"497","creationTimestamp":"2024-03-27T23:44:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-848700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"functional-848700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_27T23_44_08_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Up
date","apiVersion":"v1","time":"2024-03-27T23:44:03Z","fieldsType":"Fie [truncated 4795 chars]
	I0327 23:47:00.606858    8488 round_trippers.go:463] GET https://172.28.236.250:8441/api/v1/namespaces/kube-system/pods/etcd-functional-848700
	I0327 23:47:00.607196    8488 round_trippers.go:469] Request Headers:
	I0327 23:47:00.607196    8488 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:47:00.607196    8488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0327 23:47:00.610803    8488 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0327 23:47:00.611809    8488 round_trippers.go:577] Response Headers:
	I0327 23:47:00.611809    8488 round_trippers.go:580]     Audit-Id: f7000482-bd67-496b-b023-05972f19b012
	I0327 23:47:00.611809    8488 round_trippers.go:580]     Cache-Control: no-cache, private
	I0327 23:47:00.611809    8488 round_trippers.go:580]     Content-Type: application/json
	I0327 23:47:00.611809    8488 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6da92fab-2b78-421e-a606-c6bce8c08812
	I0327 23:47:00.611809    8488 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7d975901-91e1-451b-9e7d-51fab94bcf42
	I0327 23:47:00.611809    8488 round_trippers.go:580]     Date: Wed, 27 Mar 2024 23:47:00 GMT
	I0327 23:47:00.612083    8488 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-848700","namespace":"kube-system","uid":"05354af6-6cc0-48a9-899a-aba82a561744","resourceVersion":"501","creationTimestamp":"2024-03-27T23:44:04Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.28.236.250:2379","kubernetes.io/config.hash":"4f92c5f2db0a0990f33d8686b823140a","kubernetes.io/config.mirror":"4f92c5f2db0a0990f33d8686b823140a","kubernetes.io/config.seen":"2024-03-27T23:43:57.947500179Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-848700","uid":"7cf3af13-2a24-42b6-949f-e852ccdeb5d5","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-27T23:44:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertis
e-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/con [truncated 6608 chars]
	I0327 23:47:00.612697    8488 round_trippers.go:463] GET https://172.28.236.250:8441/api/v1/nodes/functional-848700
	I0327 23:47:00.612829    8488 round_trippers.go:469] Request Headers:
	I0327 23:47:00.612829    8488 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:47:00.612829    8488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0327 23:47:00.615805    8488 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0327 23:47:00.615805    8488 round_trippers.go:577] Response Headers:
	I0327 23:47:00.616233    8488 round_trippers.go:580]     Date: Wed, 27 Mar 2024 23:47:00 GMT
	I0327 23:47:00.616233    8488 round_trippers.go:580]     Audit-Id: aec8a9e9-1e02-4a2c-818b-3325c10d836e
	I0327 23:47:00.616233    8488 round_trippers.go:580]     Cache-Control: no-cache, private
	I0327 23:47:00.616233    8488 round_trippers.go:580]     Content-Type: application/json
	I0327 23:47:00.616233    8488 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6da92fab-2b78-421e-a606-c6bce8c08812
	I0327 23:47:00.616233    8488 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7d975901-91e1-451b-9e7d-51fab94bcf42
	I0327 23:47:00.616542    8488 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-848700","uid":"7cf3af13-2a24-42b6-949f-e852ccdeb5d5","resourceVersion":"497","creationTimestamp":"2024-03-27T23:44:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-848700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"functional-848700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_27T23_44_08_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Up
date","apiVersion":"v1","time":"2024-03-27T23:44:03Z","fieldsType":"Fie [truncated 4795 chars]
	I0327 23:47:01.120043    8488 round_trippers.go:463] GET https://172.28.236.250:8441/api/v1/namespaces/kube-system/pods/etcd-functional-848700
	I0327 23:47:01.120126    8488 round_trippers.go:469] Request Headers:
	I0327 23:47:01.120126    8488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0327 23:47:01.120126    8488 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:47:01.124620    8488 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0327 23:47:01.124620    8488 round_trippers.go:577] Response Headers:
	I0327 23:47:01.124620    8488 round_trippers.go:580]     Audit-Id: 3c3a8ad0-f410-4e0a-91e2-830c01c0884e
	I0327 23:47:01.124620    8488 round_trippers.go:580]     Cache-Control: no-cache, private
	I0327 23:47:01.124620    8488 round_trippers.go:580]     Content-Type: application/json
	I0327 23:47:01.124692    8488 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6da92fab-2b78-421e-a606-c6bce8c08812
	I0327 23:47:01.124692    8488 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7d975901-91e1-451b-9e7d-51fab94bcf42
	I0327 23:47:01.124692    8488 round_trippers.go:580]     Date: Wed, 27 Mar 2024 23:47:01 GMT
	I0327 23:47:01.125087    8488 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-848700","namespace":"kube-system","uid":"05354af6-6cc0-48a9-899a-aba82a561744","resourceVersion":"501","creationTimestamp":"2024-03-27T23:44:04Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.28.236.250:2379","kubernetes.io/config.hash":"4f92c5f2db0a0990f33d8686b823140a","kubernetes.io/config.mirror":"4f92c5f2db0a0990f33d8686b823140a","kubernetes.io/config.seen":"2024-03-27T23:43:57.947500179Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-848700","uid":"7cf3af13-2a24-42b6-949f-e852ccdeb5d5","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-27T23:44:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertis
e-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/con [truncated 6608 chars]
	I0327 23:47:01.125937    8488 round_trippers.go:463] GET https://172.28.236.250:8441/api/v1/nodes/functional-848700
	I0327 23:47:01.125937    8488 round_trippers.go:469] Request Headers:
	I0327 23:47:01.125937    8488 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:47:01.125937    8488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0327 23:47:01.128517    8488 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0327 23:47:01.128844    8488 round_trippers.go:577] Response Headers:
	I0327 23:47:01.128844    8488 round_trippers.go:580]     Audit-Id: 3a3e36e5-7b3d-4e5b-8506-e867a6ba3f1e
	I0327 23:47:01.128844    8488 round_trippers.go:580]     Cache-Control: no-cache, private
	I0327 23:47:01.128844    8488 round_trippers.go:580]     Content-Type: application/json
	I0327 23:47:01.128844    8488 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6da92fab-2b78-421e-a606-c6bce8c08812
	I0327 23:47:01.128844    8488 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7d975901-91e1-451b-9e7d-51fab94bcf42
	I0327 23:47:01.128844    8488 round_trippers.go:580]     Date: Wed, 27 Mar 2024 23:47:01 GMT
	I0327 23:47:01.128844    8488 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-848700","uid":"7cf3af13-2a24-42b6-949f-e852ccdeb5d5","resourceVersion":"497","creationTimestamp":"2024-03-27T23:44:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-848700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"functional-848700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_27T23_44_08_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Up
date","apiVersion":"v1","time":"2024-03-27T23:44:03Z","fieldsType":"Fie [truncated 4795 chars]
	I0327 23:47:01.619914    8488 round_trippers.go:463] GET https://172.28.236.250:8441/api/v1/namespaces/kube-system/pods/etcd-functional-848700
	I0327 23:47:01.619914    8488 round_trippers.go:469] Request Headers:
	I0327 23:47:01.620192    8488 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:47:01.620192    8488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0327 23:47:01.626916    8488 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0327 23:47:01.626916    8488 round_trippers.go:577] Response Headers:
	I0327 23:47:01.626916    8488 round_trippers.go:580]     Cache-Control: no-cache, private
	I0327 23:47:01.626916    8488 round_trippers.go:580]     Content-Type: application/json
	I0327 23:47:01.626916    8488 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6da92fab-2b78-421e-a606-c6bce8c08812
	I0327 23:47:01.626916    8488 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7d975901-91e1-451b-9e7d-51fab94bcf42
	I0327 23:47:01.626916    8488 round_trippers.go:580]     Date: Wed, 27 Mar 2024 23:47:01 GMT
	I0327 23:47:01.626916    8488 round_trippers.go:580]     Audit-Id: 6d9b88d4-9464-4b52-818e-c6bac0b3aed2
	I0327 23:47:01.626916    8488 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-848700","namespace":"kube-system","uid":"05354af6-6cc0-48a9-899a-aba82a561744","resourceVersion":"501","creationTimestamp":"2024-03-27T23:44:04Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.28.236.250:2379","kubernetes.io/config.hash":"4f92c5f2db0a0990f33d8686b823140a","kubernetes.io/config.mirror":"4f92c5f2db0a0990f33d8686b823140a","kubernetes.io/config.seen":"2024-03-27T23:43:57.947500179Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-848700","uid":"7cf3af13-2a24-42b6-949f-e852ccdeb5d5","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-27T23:44:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertis
e-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/con [truncated 6608 chars]
	I0327 23:47:01.627678    8488 round_trippers.go:463] GET https://172.28.236.250:8441/api/v1/nodes/functional-848700
	I0327 23:47:01.627678    8488 round_trippers.go:469] Request Headers:
	I0327 23:47:01.627678    8488 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:47:01.627678    8488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0327 23:47:01.631654    8488 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0327 23:47:01.631654    8488 round_trippers.go:577] Response Headers:
	I0327 23:47:01.631654    8488 round_trippers.go:580]     Date: Wed, 27 Mar 2024 23:47:01 GMT
	I0327 23:47:01.631654    8488 round_trippers.go:580]     Audit-Id: c8efebda-e421-4b46-b0b0-80b740864fc9
	I0327 23:47:01.631654    8488 round_trippers.go:580]     Cache-Control: no-cache, private
	I0327 23:47:01.631654    8488 round_trippers.go:580]     Content-Type: application/json
	I0327 23:47:01.631654    8488 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6da92fab-2b78-421e-a606-c6bce8c08812
	I0327 23:47:01.631654    8488 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7d975901-91e1-451b-9e7d-51fab94bcf42
	I0327 23:47:01.631654    8488 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-848700","uid":"7cf3af13-2a24-42b6-949f-e852ccdeb5d5","resourceVersion":"497","creationTimestamp":"2024-03-27T23:44:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-848700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"functional-848700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_27T23_44_08_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Up
date","apiVersion":"v1","time":"2024-03-27T23:44:03Z","fieldsType":"Fie [truncated 4795 chars]
	I0327 23:47:01.632628    8488 pod_ready.go:102] pod "etcd-functional-848700" in "kube-system" namespace has status "Ready":"False"
	I0327 23:47:02.116094    8488 round_trippers.go:463] GET https://172.28.236.250:8441/api/v1/namespaces/kube-system/pods/etcd-functional-848700
	I0327 23:47:02.116094    8488 round_trippers.go:469] Request Headers:
	I0327 23:47:02.116094    8488 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:47:02.116094    8488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0327 23:47:02.120674    8488 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0327 23:47:02.121020    8488 round_trippers.go:577] Response Headers:
	I0327 23:47:02.121020    8488 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7d975901-91e1-451b-9e7d-51fab94bcf42
	I0327 23:47:02.121020    8488 round_trippers.go:580]     Date: Wed, 27 Mar 2024 23:47:02 GMT
	I0327 23:47:02.121020    8488 round_trippers.go:580]     Audit-Id: 8524670b-9b8c-4a46-a718-caca12562e5f
	I0327 23:47:02.121020    8488 round_trippers.go:580]     Cache-Control: no-cache, private
	I0327 23:47:02.121117    8488 round_trippers.go:580]     Content-Type: application/json
	I0327 23:47:02.121117    8488 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6da92fab-2b78-421e-a606-c6bce8c08812
	I0327 23:47:02.121344    8488 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-848700","namespace":"kube-system","uid":"05354af6-6cc0-48a9-899a-aba82a561744","resourceVersion":"501","creationTimestamp":"2024-03-27T23:44:04Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.28.236.250:2379","kubernetes.io/config.hash":"4f92c5f2db0a0990f33d8686b823140a","kubernetes.io/config.mirror":"4f92c5f2db0a0990f33d8686b823140a","kubernetes.io/config.seen":"2024-03-27T23:43:57.947500179Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-848700","uid":"7cf3af13-2a24-42b6-949f-e852ccdeb5d5","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-27T23:44:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertis
e-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/con [truncated 6608 chars]
	I0327 23:47:02.121511    8488 round_trippers.go:463] GET https://172.28.236.250:8441/api/v1/nodes/functional-848700
	I0327 23:47:02.121511    8488 round_trippers.go:469] Request Headers:
	I0327 23:47:02.121511    8488 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:47:02.121511    8488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0327 23:47:02.124781    8488 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0327 23:47:02.125365    8488 round_trippers.go:577] Response Headers:
	I0327 23:47:02.125365    8488 round_trippers.go:580]     Audit-Id: fd3e8d79-a606-4702-adf8-8cec079d2900
	I0327 23:47:02.125365    8488 round_trippers.go:580]     Cache-Control: no-cache, private
	I0327 23:47:02.125365    8488 round_trippers.go:580]     Content-Type: application/json
	I0327 23:47:02.125365    8488 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6da92fab-2b78-421e-a606-c6bce8c08812
	I0327 23:47:02.125365    8488 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7d975901-91e1-451b-9e7d-51fab94bcf42
	I0327 23:47:02.125365    8488 round_trippers.go:580]     Date: Wed, 27 Mar 2024 23:47:02 GMT
	I0327 23:47:02.125654    8488 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-848700","uid":"7cf3af13-2a24-42b6-949f-e852ccdeb5d5","resourceVersion":"497","creationTimestamp":"2024-03-27T23:44:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-848700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"functional-848700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_27T23_44_08_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Up
date","apiVersion":"v1","time":"2024-03-27T23:44:03Z","fieldsType":"Fie [truncated 4795 chars]
	I0327 23:47:02.614654    8488 round_trippers.go:463] GET https://172.28.236.250:8441/api/v1/namespaces/kube-system/pods/etcd-functional-848700
	I0327 23:47:02.614654    8488 round_trippers.go:469] Request Headers:
	I0327 23:47:02.614752    8488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0327 23:47:02.614752    8488 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:47:02.619673    8488 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0327 23:47:02.619673    8488 round_trippers.go:577] Response Headers:
	I0327 23:47:02.620123    8488 round_trippers.go:580]     Audit-Id: eda02a94-f1e7-4ddb-bb73-148fd88bbad2
	I0327 23:47:02.620123    8488 round_trippers.go:580]     Cache-Control: no-cache, private
	I0327 23:47:02.620123    8488 round_trippers.go:580]     Content-Type: application/json
	I0327 23:47:02.620123    8488 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6da92fab-2b78-421e-a606-c6bce8c08812
	I0327 23:47:02.620123    8488 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7d975901-91e1-451b-9e7d-51fab94bcf42
	I0327 23:47:02.620172    8488 round_trippers.go:580]     Date: Wed, 27 Mar 2024 23:47:02 GMT
	I0327 23:47:02.620248    8488 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-848700","namespace":"kube-system","uid":"05354af6-6cc0-48a9-899a-aba82a561744","resourceVersion":"501","creationTimestamp":"2024-03-27T23:44:04Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.28.236.250:2379","kubernetes.io/config.hash":"4f92c5f2db0a0990f33d8686b823140a","kubernetes.io/config.mirror":"4f92c5f2db0a0990f33d8686b823140a","kubernetes.io/config.seen":"2024-03-27T23:43:57.947500179Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-848700","uid":"7cf3af13-2a24-42b6-949f-e852ccdeb5d5","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-27T23:44:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertis
e-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/con [truncated 6608 chars]
	I0327 23:47:02.621065    8488 round_trippers.go:463] GET https://172.28.236.250:8441/api/v1/nodes/functional-848700
	I0327 23:47:02.621065    8488 round_trippers.go:469] Request Headers:
	I0327 23:47:02.621065    8488 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:47:02.621065    8488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0327 23:47:02.624190    8488 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0327 23:47:02.624190    8488 round_trippers.go:577] Response Headers:
	I0327 23:47:02.624190    8488 round_trippers.go:580]     Date: Wed, 27 Mar 2024 23:47:02 GMT
	I0327 23:47:02.624190    8488 round_trippers.go:580]     Audit-Id: 1e36587f-dc4a-4473-9e10-d715c415f1db
	I0327 23:47:02.624190    8488 round_trippers.go:580]     Cache-Control: no-cache, private
	I0327 23:47:02.624261    8488 round_trippers.go:580]     Content-Type: application/json
	I0327 23:47:02.624261    8488 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6da92fab-2b78-421e-a606-c6bce8c08812
	I0327 23:47:02.624261    8488 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7d975901-91e1-451b-9e7d-51fab94bcf42
	I0327 23:47:02.624476    8488 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-848700","uid":"7cf3af13-2a24-42b6-949f-e852ccdeb5d5","resourceVersion":"497","creationTimestamp":"2024-03-27T23:44:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-848700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"functional-848700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_27T23_44_08_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Up
date","apiVersion":"v1","time":"2024-03-27T23:44:03Z","fieldsType":"Fie [truncated 4795 chars]
	I0327 23:47:03.115004    8488 round_trippers.go:463] GET https://172.28.236.250:8441/api/v1/namespaces/kube-system/pods/etcd-functional-848700
	I0327 23:47:03.115067    8488 round_trippers.go:469] Request Headers:
	I0327 23:47:03.115067    8488 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:47:03.115123    8488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0327 23:47:03.121053    8488 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0327 23:47:03.121053    8488 round_trippers.go:577] Response Headers:
	I0327 23:47:03.121053    8488 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7d975901-91e1-451b-9e7d-51fab94bcf42
	I0327 23:47:03.121053    8488 round_trippers.go:580]     Date: Wed, 27 Mar 2024 23:47:03 GMT
	I0327 23:47:03.121053    8488 round_trippers.go:580]     Audit-Id: aedf0c5c-4268-4685-9e36-b2af1888e30a
	I0327 23:47:03.121053    8488 round_trippers.go:580]     Cache-Control: no-cache, private
	I0327 23:47:03.121053    8488 round_trippers.go:580]     Content-Type: application/json
	I0327 23:47:03.121053    8488 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6da92fab-2b78-421e-a606-c6bce8c08812
	I0327 23:47:03.122696    8488 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-848700","namespace":"kube-system","uid":"05354af6-6cc0-48a9-899a-aba82a561744","resourceVersion":"501","creationTimestamp":"2024-03-27T23:44:04Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.28.236.250:2379","kubernetes.io/config.hash":"4f92c5f2db0a0990f33d8686b823140a","kubernetes.io/config.mirror":"4f92c5f2db0a0990f33d8686b823140a","kubernetes.io/config.seen":"2024-03-27T23:43:57.947500179Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-848700","uid":"7cf3af13-2a24-42b6-949f-e852ccdeb5d5","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-27T23:44:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertis
e-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/con [truncated 6608 chars]
	I0327 23:47:03.123481    8488 round_trippers.go:463] GET https://172.28.236.250:8441/api/v1/nodes/functional-848700
	I0327 23:47:03.123541    8488 round_trippers.go:469] Request Headers:
	I0327 23:47:03.123541    8488 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:47:03.123541    8488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0327 23:47:03.125997    8488 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0327 23:47:03.125997    8488 round_trippers.go:577] Response Headers:
	I0327 23:47:03.125997    8488 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6da92fab-2b78-421e-a606-c6bce8c08812
	I0327 23:47:03.125997    8488 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7d975901-91e1-451b-9e7d-51fab94bcf42
	I0327 23:47:03.125997    8488 round_trippers.go:580]     Date: Wed, 27 Mar 2024 23:47:03 GMT
	I0327 23:47:03.125997    8488 round_trippers.go:580]     Audit-Id: 9a09e051-a2d5-41ce-a505-f02dfeaa7931
	I0327 23:47:03.125997    8488 round_trippers.go:580]     Cache-Control: no-cache, private
	I0327 23:47:03.125997    8488 round_trippers.go:580]     Content-Type: application/json
	I0327 23:47:03.127247    8488 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-848700","uid":"7cf3af13-2a24-42b6-949f-e852ccdeb5d5","resourceVersion":"497","creationTimestamp":"2024-03-27T23:44:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-848700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"functional-848700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_27T23_44_08_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Up
date","apiVersion":"v1","time":"2024-03-27T23:44:03Z","fieldsType":"Fie [truncated 4795 chars]
	I0327 23:47:03.614283    8488 round_trippers.go:463] GET https://172.28.236.250:8441/api/v1/namespaces/kube-system/pods/etcd-functional-848700
	I0327 23:47:03.614342    8488 round_trippers.go:469] Request Headers:
	I0327 23:47:03.614401    8488 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:47:03.614401    8488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0327 23:47:03.618253    8488 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0327 23:47:03.618253    8488 round_trippers.go:577] Response Headers:
	I0327 23:47:03.618253    8488 round_trippers.go:580]     Content-Type: application/json
	I0327 23:47:03.618253    8488 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6da92fab-2b78-421e-a606-c6bce8c08812
	I0327 23:47:03.618253    8488 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7d975901-91e1-451b-9e7d-51fab94bcf42
	I0327 23:47:03.618253    8488 round_trippers.go:580]     Date: Wed, 27 Mar 2024 23:47:03 GMT
	I0327 23:47:03.618253    8488 round_trippers.go:580]     Audit-Id: d4622e14-b628-4030-9855-44b292e4a49b
	I0327 23:47:03.618253    8488 round_trippers.go:580]     Cache-Control: no-cache, private
	I0327 23:47:03.619167    8488 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-848700","namespace":"kube-system","uid":"05354af6-6cc0-48a9-899a-aba82a561744","resourceVersion":"501","creationTimestamp":"2024-03-27T23:44:04Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.28.236.250:2379","kubernetes.io/config.hash":"4f92c5f2db0a0990f33d8686b823140a","kubernetes.io/config.mirror":"4f92c5f2db0a0990f33d8686b823140a","kubernetes.io/config.seen":"2024-03-27T23:43:57.947500179Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-848700","uid":"7cf3af13-2a24-42b6-949f-e852ccdeb5d5","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-27T23:44:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertis
e-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/con [truncated 6608 chars]
	I0327 23:47:03.619167    8488 round_trippers.go:463] GET https://172.28.236.250:8441/api/v1/nodes/functional-848700
	I0327 23:47:03.619167    8488 round_trippers.go:469] Request Headers:
	I0327 23:47:03.619167    8488 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:47:03.619167    8488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0327 23:47:03.623257    8488 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0327 23:47:03.623663    8488 round_trippers.go:577] Response Headers:
	I0327 23:47:03.623663    8488 round_trippers.go:580]     Date: Wed, 27 Mar 2024 23:47:03 GMT
	I0327 23:47:03.623663    8488 round_trippers.go:580]     Audit-Id: 9f00e685-7b64-4fbf-ba79-0698c3b37b75
	I0327 23:47:03.623663    8488 round_trippers.go:580]     Cache-Control: no-cache, private
	I0327 23:47:03.623663    8488 round_trippers.go:580]     Content-Type: application/json
	I0327 23:47:03.623713    8488 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6da92fab-2b78-421e-a606-c6bce8c08812
	I0327 23:47:03.623713    8488 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7d975901-91e1-451b-9e7d-51fab94bcf42
	I0327 23:47:03.623751    8488 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-848700","uid":"7cf3af13-2a24-42b6-949f-e852ccdeb5d5","resourceVersion":"497","creationTimestamp":"2024-03-27T23:44:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-848700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"functional-848700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_27T23_44_08_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Up
date","apiVersion":"v1","time":"2024-03-27T23:44:03Z","fieldsType":"Fie [truncated 4795 chars]
	I0327 23:47:04.117080    8488 round_trippers.go:463] GET https://172.28.236.250:8441/api/v1/namespaces/kube-system/pods/etcd-functional-848700
	I0327 23:47:04.117080    8488 round_trippers.go:469] Request Headers:
	I0327 23:47:04.117080    8488 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:47:04.117080    8488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0327 23:47:04.121128    8488 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0327 23:47:04.121128    8488 round_trippers.go:577] Response Headers:
	I0327 23:47:04.121128    8488 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7d975901-91e1-451b-9e7d-51fab94bcf42
	I0327 23:47:04.121128    8488 round_trippers.go:580]     Date: Wed, 27 Mar 2024 23:47:04 GMT
	I0327 23:47:04.121128    8488 round_trippers.go:580]     Audit-Id: 84ad9b16-b296-48b6-bcd9-52e96786415b
	I0327 23:47:04.121128    8488 round_trippers.go:580]     Cache-Control: no-cache, private
	I0327 23:47:04.121839    8488 round_trippers.go:580]     Content-Type: application/json
	I0327 23:47:04.121839    8488 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6da92fab-2b78-421e-a606-c6bce8c08812
	I0327 23:47:04.122244    8488 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-848700","namespace":"kube-system","uid":"05354af6-6cc0-48a9-899a-aba82a561744","resourceVersion":"501","creationTimestamp":"2024-03-27T23:44:04Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.28.236.250:2379","kubernetes.io/config.hash":"4f92c5f2db0a0990f33d8686b823140a","kubernetes.io/config.mirror":"4f92c5f2db0a0990f33d8686b823140a","kubernetes.io/config.seen":"2024-03-27T23:43:57.947500179Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-848700","uid":"7cf3af13-2a24-42b6-949f-e852ccdeb5d5","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-27T23:44:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertis
e-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/con [truncated 6608 chars]
	I0327 23:47:04.123105    8488 round_trippers.go:463] GET https://172.28.236.250:8441/api/v1/nodes/functional-848700
	I0327 23:47:04.123105    8488 round_trippers.go:469] Request Headers:
	I0327 23:47:04.123105    8488 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:47:04.123105    8488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0327 23:47:04.127085    8488 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0327 23:47:04.127085    8488 round_trippers.go:577] Response Headers:
	I0327 23:47:04.127085    8488 round_trippers.go:580]     Content-Type: application/json
	I0327 23:47:04.127085    8488 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6da92fab-2b78-421e-a606-c6bce8c08812
	I0327 23:47:04.127085    8488 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7d975901-91e1-451b-9e7d-51fab94bcf42
	I0327 23:47:04.127085    8488 round_trippers.go:580]     Date: Wed, 27 Mar 2024 23:47:04 GMT
	I0327 23:47:04.127085    8488 round_trippers.go:580]     Audit-Id: d69ff705-5185-403d-8e9c-37689e3f6155
	I0327 23:47:04.127085    8488 round_trippers.go:580]     Cache-Control: no-cache, private
	I0327 23:47:04.127085    8488 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-848700","uid":"7cf3af13-2a24-42b6-949f-e852ccdeb5d5","resourceVersion":"497","creationTimestamp":"2024-03-27T23:44:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-848700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"functional-848700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_27T23_44_08_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Up
date","apiVersion":"v1","time":"2024-03-27T23:44:03Z","fieldsType":"Fie [truncated 4795 chars]
	I0327 23:47:04.127892    8488 pod_ready.go:102] pod "etcd-functional-848700" in "kube-system" namespace has status "Ready":"False"
	I0327 23:47:04.615314    8488 round_trippers.go:463] GET https://172.28.236.250:8441/api/v1/namespaces/kube-system/pods/etcd-functional-848700
	I0327 23:47:04.615314    8488 round_trippers.go:469] Request Headers:
	I0327 23:47:04.615314    8488 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:47:04.615314    8488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0327 23:47:04.620194    8488 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0327 23:47:04.620194    8488 round_trippers.go:577] Response Headers:
	I0327 23:47:04.620194    8488 round_trippers.go:580]     Content-Type: application/json
	I0327 23:47:04.620194    8488 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6da92fab-2b78-421e-a606-c6bce8c08812
	I0327 23:47:04.620194    8488 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7d975901-91e1-451b-9e7d-51fab94bcf42
	I0327 23:47:04.620194    8488 round_trippers.go:580]     Date: Wed, 27 Mar 2024 23:47:04 GMT
	I0327 23:47:04.620194    8488 round_trippers.go:580]     Audit-Id: fa8c25ff-5cb4-4a6b-9f64-7cc0ed05bf7c
	I0327 23:47:04.620194    8488 round_trippers.go:580]     Cache-Control: no-cache, private
	I0327 23:47:04.620194    8488 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-848700","namespace":"kube-system","uid":"05354af6-6cc0-48a9-899a-aba82a561744","resourceVersion":"501","creationTimestamp":"2024-03-27T23:44:04Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.28.236.250:2379","kubernetes.io/config.hash":"4f92c5f2db0a0990f33d8686b823140a","kubernetes.io/config.mirror":"4f92c5f2db0a0990f33d8686b823140a","kubernetes.io/config.seen":"2024-03-27T23:43:57.947500179Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-848700","uid":"7cf3af13-2a24-42b6-949f-e852ccdeb5d5","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-27T23:44:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertis
e-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/con [truncated 6608 chars]
	I0327 23:47:04.621140    8488 round_trippers.go:463] GET https://172.28.236.250:8441/api/v1/nodes/functional-848700
	I0327 23:47:04.621190    8488 round_trippers.go:469] Request Headers:
	I0327 23:47:04.621190    8488 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:47:04.621190    8488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0327 23:47:04.623810    8488 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0327 23:47:04.624335    8488 round_trippers.go:577] Response Headers:
	I0327 23:47:04.624335    8488 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6da92fab-2b78-421e-a606-c6bce8c08812
	I0327 23:47:04.624335    8488 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7d975901-91e1-451b-9e7d-51fab94bcf42
	I0327 23:47:04.624335    8488 round_trippers.go:580]     Date: Wed, 27 Mar 2024 23:47:04 GMT
	I0327 23:47:04.624424    8488 round_trippers.go:580]     Audit-Id: 12c193c1-ab8a-4783-9913-067e654df79e
	I0327 23:47:04.624424    8488 round_trippers.go:580]     Cache-Control: no-cache, private
	I0327 23:47:04.624424    8488 round_trippers.go:580]     Content-Type: application/json
	I0327 23:47:04.624807    8488 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-848700","uid":"7cf3af13-2a24-42b6-949f-e852ccdeb5d5","resourceVersion":"497","creationTimestamp":"2024-03-27T23:44:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-848700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"functional-848700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_27T23_44_08_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Up
date","apiVersion":"v1","time":"2024-03-27T23:44:03Z","fieldsType":"Fie [truncated 4795 chars]
	I0327 23:47:05.113184    8488 round_trippers.go:463] GET https://172.28.236.250:8441/api/v1/namespaces/kube-system/pods/etcd-functional-848700
	I0327 23:47:05.113184    8488 round_trippers.go:469] Request Headers:
	I0327 23:47:05.113184    8488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0327 23:47:05.113184    8488 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:47:05.117656    8488 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0327 23:47:05.117656    8488 round_trippers.go:577] Response Headers:
	I0327 23:47:05.118173    8488 round_trippers.go:580]     Cache-Control: no-cache, private
	I0327 23:47:05.118173    8488 round_trippers.go:580]     Content-Type: application/json
	I0327 23:47:05.118173    8488 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6da92fab-2b78-421e-a606-c6bce8c08812
	I0327 23:47:05.118173    8488 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7d975901-91e1-451b-9e7d-51fab94bcf42
	I0327 23:47:05.118173    8488 round_trippers.go:580]     Date: Wed, 27 Mar 2024 23:47:05 GMT
	I0327 23:47:05.118173    8488 round_trippers.go:580]     Audit-Id: f8af6ae1-d004-413b-b525-ce539c89879a
	I0327 23:47:05.118874    8488 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-848700","namespace":"kube-system","uid":"05354af6-6cc0-48a9-899a-aba82a561744","resourceVersion":"501","creationTimestamp":"2024-03-27T23:44:04Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.28.236.250:2379","kubernetes.io/config.hash":"4f92c5f2db0a0990f33d8686b823140a","kubernetes.io/config.mirror":"4f92c5f2db0a0990f33d8686b823140a","kubernetes.io/config.seen":"2024-03-27T23:43:57.947500179Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-848700","uid":"7cf3af13-2a24-42b6-949f-e852ccdeb5d5","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-27T23:44:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertis
e-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/con [truncated 6608 chars]
	I0327 23:47:05.119615    8488 round_trippers.go:463] GET https://172.28.236.250:8441/api/v1/nodes/functional-848700
	I0327 23:47:05.119691    8488 round_trippers.go:469] Request Headers:
	I0327 23:47:05.119691    8488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0327 23:47:05.119691    8488 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:47:05.122934    8488 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0327 23:47:05.122934    8488 round_trippers.go:577] Response Headers:
	I0327 23:47:05.122934    8488 round_trippers.go:580]     Content-Type: application/json
	I0327 23:47:05.122934    8488 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6da92fab-2b78-421e-a606-c6bce8c08812
	I0327 23:47:05.122934    8488 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7d975901-91e1-451b-9e7d-51fab94bcf42
	I0327 23:47:05.122934    8488 round_trippers.go:580]     Date: Wed, 27 Mar 2024 23:47:05 GMT
	I0327 23:47:05.122934    8488 round_trippers.go:580]     Audit-Id: 0f32fbf4-814f-451e-a273-ea7d71a3b4fa
	I0327 23:47:05.122934    8488 round_trippers.go:580]     Cache-Control: no-cache, private
	I0327 23:47:05.122934    8488 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-848700","uid":"7cf3af13-2a24-42b6-949f-e852ccdeb5d5","resourceVersion":"497","creationTimestamp":"2024-03-27T23:44:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-848700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"functional-848700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_27T23_44_08_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Up
date","apiVersion":"v1","time":"2024-03-27T23:44:03Z","fieldsType":"Fie [truncated 4795 chars]
	I0327 23:47:05.613545    8488 round_trippers.go:463] GET https://172.28.236.250:8441/api/v1/namespaces/kube-system/pods/etcd-functional-848700
	I0327 23:47:05.613864    8488 round_trippers.go:469] Request Headers:
	I0327 23:47:05.613946    8488 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:47:05.613946    8488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0327 23:47:05.618319    8488 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0327 23:47:05.619031    8488 round_trippers.go:577] Response Headers:
	I0327 23:47:05.619115    8488 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7d975901-91e1-451b-9e7d-51fab94bcf42
	I0327 23:47:05.619196    8488 round_trippers.go:580]     Date: Wed, 27 Mar 2024 23:47:05 GMT
	I0327 23:47:05.619300    8488 round_trippers.go:580]     Audit-Id: 6e3d1d69-b551-4364-bb43-fed18c250346
	I0327 23:47:05.619388    8488 round_trippers.go:580]     Cache-Control: no-cache, private
	I0327 23:47:05.619513    8488 round_trippers.go:580]     Content-Type: application/json
	I0327 23:47:05.619513    8488 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6da92fab-2b78-421e-a606-c6bce8c08812
	I0327 23:47:05.620230    8488 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-848700","namespace":"kube-system","uid":"05354af6-6cc0-48a9-899a-aba82a561744","resourceVersion":"501","creationTimestamp":"2024-03-27T23:44:04Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.28.236.250:2379","kubernetes.io/config.hash":"4f92c5f2db0a0990f33d8686b823140a","kubernetes.io/config.mirror":"4f92c5f2db0a0990f33d8686b823140a","kubernetes.io/config.seen":"2024-03-27T23:43:57.947500179Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-848700","uid":"7cf3af13-2a24-42b6-949f-e852ccdeb5d5","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-27T23:44:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertis
e-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/con [truncated 6608 chars]
	I0327 23:47:05.621054    8488 round_trippers.go:463] GET https://172.28.236.250:8441/api/v1/nodes/functional-848700
	I0327 23:47:05.621054    8488 round_trippers.go:469] Request Headers:
	I0327 23:47:05.621054    8488 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:47:05.621054    8488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0327 23:47:05.625365    8488 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0327 23:47:05.625365    8488 round_trippers.go:577] Response Headers:
	I0327 23:47:05.625365    8488 round_trippers.go:580]     Audit-Id: 3a6d181b-7685-44f7-99ce-9491f9abda30
	I0327 23:47:05.625365    8488 round_trippers.go:580]     Cache-Control: no-cache, private
	I0327 23:47:05.625365    8488 round_trippers.go:580]     Content-Type: application/json
	I0327 23:47:05.625365    8488 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6da92fab-2b78-421e-a606-c6bce8c08812
	I0327 23:47:05.625365    8488 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7d975901-91e1-451b-9e7d-51fab94bcf42
	I0327 23:47:05.625365    8488 round_trippers.go:580]     Date: Wed, 27 Mar 2024 23:47:05 GMT
	I0327 23:47:05.625365    8488 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-848700","uid":"7cf3af13-2a24-42b6-949f-e852ccdeb5d5","resourceVersion":"497","creationTimestamp":"2024-03-27T23:44:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-848700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"functional-848700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_27T23_44_08_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Up
date","apiVersion":"v1","time":"2024-03-27T23:44:03Z","fieldsType":"Fie [truncated 4795 chars]
	I0327 23:47:06.112267    8488 round_trippers.go:463] GET https://172.28.236.250:8441/api/v1/namespaces/kube-system/pods/etcd-functional-848700
	I0327 23:47:06.112334    8488 round_trippers.go:469] Request Headers:
	I0327 23:47:06.112334    8488 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:47:06.112334    8488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0327 23:47:06.122821    8488 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0327 23:47:06.122821    8488 round_trippers.go:577] Response Headers:
	I0327 23:47:06.122821    8488 round_trippers.go:580]     Cache-Control: no-cache, private
	I0327 23:47:06.123103    8488 round_trippers.go:580]     Content-Type: application/json
	I0327 23:47:06.123103    8488 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6da92fab-2b78-421e-a606-c6bce8c08812
	I0327 23:47:06.123103    8488 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7d975901-91e1-451b-9e7d-51fab94bcf42
	I0327 23:47:06.123103    8488 round_trippers.go:580]     Date: Wed, 27 Mar 2024 23:47:06 GMT
	I0327 23:47:06.123103    8488 round_trippers.go:580]     Audit-Id: 5508077b-4b32-4c1a-a3cc-ec2fb35fbc3e
	I0327 23:47:06.129821    8488 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-848700","namespace":"kube-system","uid":"05354af6-6cc0-48a9-899a-aba82a561744","resourceVersion":"501","creationTimestamp":"2024-03-27T23:44:04Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.28.236.250:2379","kubernetes.io/config.hash":"4f92c5f2db0a0990f33d8686b823140a","kubernetes.io/config.mirror":"4f92c5f2db0a0990f33d8686b823140a","kubernetes.io/config.seen":"2024-03-27T23:43:57.947500179Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-848700","uid":"7cf3af13-2a24-42b6-949f-e852ccdeb5d5","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-27T23:44:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertis
e-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/con [truncated 6608 chars]
	I0327 23:47:06.131596    8488 round_trippers.go:463] GET https://172.28.236.250:8441/api/v1/nodes/functional-848700
	I0327 23:47:06.131596    8488 round_trippers.go:469] Request Headers:
	I0327 23:47:06.131596    8488 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:47:06.131596    8488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0327 23:47:06.137818    8488 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0327 23:47:06.138193    8488 round_trippers.go:577] Response Headers:
	I0327 23:47:06.138193    8488 round_trippers.go:580]     Audit-Id: 124d9541-e483-41dd-aa58-0822c72a5496
	I0327 23:47:06.138193    8488 round_trippers.go:580]     Cache-Control: no-cache, private
	I0327 23:47:06.138193    8488 round_trippers.go:580]     Content-Type: application/json
	I0327 23:47:06.138193    8488 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6da92fab-2b78-421e-a606-c6bce8c08812
	I0327 23:47:06.138193    8488 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7d975901-91e1-451b-9e7d-51fab94bcf42
	I0327 23:47:06.138193    8488 round_trippers.go:580]     Date: Wed, 27 Mar 2024 23:47:06 GMT
	I0327 23:47:06.138539    8488 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-848700","uid":"7cf3af13-2a24-42b6-949f-e852ccdeb5d5","resourceVersion":"497","creationTimestamp":"2024-03-27T23:44:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-848700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"functional-848700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_27T23_44_08_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Up
date","apiVersion":"v1","time":"2024-03-27T23:44:03Z","fieldsType":"Fie [truncated 4795 chars]
	I0327 23:47:06.138780    8488 pod_ready.go:102] pod "etcd-functional-848700" in "kube-system" namespace has status "Ready":"False"
	I0327 23:47:06.606241    8488 round_trippers.go:463] GET https://172.28.236.250:8441/api/v1/namespaces/kube-system/pods/etcd-functional-848700
	I0327 23:47:06.606324    8488 round_trippers.go:469] Request Headers:
	I0327 23:47:06.606324    8488 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:47:06.606420    8488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0327 23:47:06.610066    8488 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0327 23:47:06.610066    8488 round_trippers.go:577] Response Headers:
	I0327 23:47:06.610066    8488 round_trippers.go:580]     Audit-Id: c1d71a57-6203-496e-b258-2a1cabddf650
	I0327 23:47:06.610066    8488 round_trippers.go:580]     Cache-Control: no-cache, private
	I0327 23:47:06.610066    8488 round_trippers.go:580]     Content-Type: application/json
	I0327 23:47:06.610066    8488 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6da92fab-2b78-421e-a606-c6bce8c08812
	I0327 23:47:06.610066    8488 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7d975901-91e1-451b-9e7d-51fab94bcf42
	I0327 23:47:06.610066    8488 round_trippers.go:580]     Date: Wed, 27 Mar 2024 23:47:06 GMT
	I0327 23:47:06.610746    8488 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-848700","namespace":"kube-system","uid":"05354af6-6cc0-48a9-899a-aba82a561744","resourceVersion":"575","creationTimestamp":"2024-03-27T23:44:04Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.28.236.250:2379","kubernetes.io/config.hash":"4f92c5f2db0a0990f33d8686b823140a","kubernetes.io/config.mirror":"4f92c5f2db0a0990f33d8686b823140a","kubernetes.io/config.seen":"2024-03-27T23:43:57.947500179Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-848700","uid":"7cf3af13-2a24-42b6-949f-e852ccdeb5d5","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-27T23:44:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertis
e-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/con [truncated 6384 chars]
	I0327 23:47:06.611813    8488 round_trippers.go:463] GET https://172.28.236.250:8441/api/v1/nodes/functional-848700
	I0327 23:47:06.611871    8488 round_trippers.go:469] Request Headers:
	I0327 23:47:06.611871    8488 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:47:06.611871    8488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0327 23:47:06.615109    8488 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0327 23:47:06.615281    8488 round_trippers.go:577] Response Headers:
	I0327 23:47:06.615281    8488 round_trippers.go:580]     Audit-Id: a5ed5574-67d9-4798-9006-664db1f88ffd
	I0327 23:47:06.615281    8488 round_trippers.go:580]     Cache-Control: no-cache, private
	I0327 23:47:06.615281    8488 round_trippers.go:580]     Content-Type: application/json
	I0327 23:47:06.615281    8488 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6da92fab-2b78-421e-a606-c6bce8c08812
	I0327 23:47:06.615281    8488 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7d975901-91e1-451b-9e7d-51fab94bcf42
	I0327 23:47:06.615281    8488 round_trippers.go:580]     Date: Wed, 27 Mar 2024 23:47:06 GMT
	I0327 23:47:06.615613    8488 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-848700","uid":"7cf3af13-2a24-42b6-949f-e852ccdeb5d5","resourceVersion":"497","creationTimestamp":"2024-03-27T23:44:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-848700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"functional-848700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_27T23_44_08_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Up
date","apiVersion":"v1","time":"2024-03-27T23:44:03Z","fieldsType":"Fie [truncated 4795 chars]
	I0327 23:47:06.616032    8488 pod_ready.go:92] pod "etcd-functional-848700" in "kube-system" namespace has status "Ready":"True"
	I0327 23:47:06.616032    8488 pod_ready.go:81] duration metric: took 7.0106556s for pod "etcd-functional-848700" in "kube-system" namespace to be "Ready" ...
	I0327 23:47:06.616032    8488 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-functional-848700" in "kube-system" namespace to be "Ready" ...
	I0327 23:47:06.616032    8488 round_trippers.go:463] GET https://172.28.236.250:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-848700
	I0327 23:47:06.616032    8488 round_trippers.go:469] Request Headers:
	I0327 23:47:06.616032    8488 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:47:06.616032    8488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0327 23:47:06.622720    8488 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0327 23:47:06.622783    8488 round_trippers.go:577] Response Headers:
	I0327 23:47:06.622809    8488 round_trippers.go:580]     Audit-Id: a25ad545-28df-4285-8dbf-6c2a49908c04
	I0327 23:47:06.622809    8488 round_trippers.go:580]     Cache-Control: no-cache, private
	I0327 23:47:06.622809    8488 round_trippers.go:580]     Content-Type: application/json
	I0327 23:47:06.622809    8488 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6da92fab-2b78-421e-a606-c6bce8c08812
	I0327 23:47:06.622809    8488 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7d975901-91e1-451b-9e7d-51fab94bcf42
	I0327 23:47:06.622809    8488 round_trippers.go:580]     Date: Wed, 27 Mar 2024 23:47:06 GMT
	I0327 23:47:06.623569    8488 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-functional-848700","namespace":"kube-system","uid":"e4bdd59f-da15-4c7c-bf7b-edf829ae8b0b","resourceVersion":"502","creationTimestamp":"2024-03-27T23:44:07Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.28.236.250:8441","kubernetes.io/config.hash":"be479247e39e66a2012516faa69219b0","kubernetes.io/config.mirror":"be479247e39e66a2012516faa69219b0","kubernetes.io/config.seen":"2024-03-27T23:43:57.947501680Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-848700","uid":"7cf3af13-2a24-42b6-949f-e852ccdeb5d5","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-27T23:44:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.
kubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernet [truncated 8158 chars]
	I0327 23:47:06.624299    8488 round_trippers.go:463] GET https://172.28.236.250:8441/api/v1/nodes/functional-848700
	I0327 23:47:06.624299    8488 round_trippers.go:469] Request Headers:
	I0327 23:47:06.624299    8488 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:47:06.624299    8488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0327 23:47:06.626992    8488 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0327 23:47:06.626992    8488 round_trippers.go:577] Response Headers:
	I0327 23:47:06.626992    8488 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6da92fab-2b78-421e-a606-c6bce8c08812
	I0327 23:47:06.626992    8488 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7d975901-91e1-451b-9e7d-51fab94bcf42
	I0327 23:47:06.626992    8488 round_trippers.go:580]     Date: Wed, 27 Mar 2024 23:47:06 GMT
	I0327 23:47:06.626992    8488 round_trippers.go:580]     Audit-Id: 93d57b8c-87a2-49b5-ae8a-7bbc5a2f9764
	I0327 23:47:06.626992    8488 round_trippers.go:580]     Cache-Control: no-cache, private
	I0327 23:47:06.626992    8488 round_trippers.go:580]     Content-Type: application/json
	I0327 23:47:06.626992    8488 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-848700","uid":"7cf3af13-2a24-42b6-949f-e852ccdeb5d5","resourceVersion":"497","creationTimestamp":"2024-03-27T23:44:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-848700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"functional-848700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_27T23_44_08_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Up
date","apiVersion":"v1","time":"2024-03-27T23:44:03Z","fieldsType":"Fie [truncated 4795 chars]
	I0327 23:47:07.120291    8488 round_trippers.go:463] GET https://172.28.236.250:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-848700
	I0327 23:47:07.120291    8488 round_trippers.go:469] Request Headers:
	I0327 23:47:07.120291    8488 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:47:07.120291    8488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0327 23:47:07.124892    8488 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0327 23:47:07.124892    8488 round_trippers.go:577] Response Headers:
	I0327 23:47:07.124892    8488 round_trippers.go:580]     Audit-Id: 62c80dd3-8f8c-4c10-9ce9-12a060f915f7
	I0327 23:47:07.124892    8488 round_trippers.go:580]     Cache-Control: no-cache, private
	I0327 23:47:07.124892    8488 round_trippers.go:580]     Content-Type: application/json
	I0327 23:47:07.124892    8488 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6da92fab-2b78-421e-a606-c6bce8c08812
	I0327 23:47:07.124892    8488 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7d975901-91e1-451b-9e7d-51fab94bcf42
	I0327 23:47:07.124892    8488 round_trippers.go:580]     Date: Wed, 27 Mar 2024 23:47:07 GMT
	I0327 23:47:07.125817    8488 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-functional-848700","namespace":"kube-system","uid":"e4bdd59f-da15-4c7c-bf7b-edf829ae8b0b","resourceVersion":"502","creationTimestamp":"2024-03-27T23:44:07Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.28.236.250:8441","kubernetes.io/config.hash":"be479247e39e66a2012516faa69219b0","kubernetes.io/config.mirror":"be479247e39e66a2012516faa69219b0","kubernetes.io/config.seen":"2024-03-27T23:43:57.947501680Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-848700","uid":"7cf3af13-2a24-42b6-949f-e852ccdeb5d5","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-27T23:44:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.
kubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernet [truncated 8158 chars]
	I0327 23:47:07.126426    8488 round_trippers.go:463] GET https://172.28.236.250:8441/api/v1/nodes/functional-848700
	I0327 23:47:07.126426    8488 round_trippers.go:469] Request Headers:
	I0327 23:47:07.126426    8488 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:47:07.126426    8488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0327 23:47:07.130001    8488 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0327 23:47:07.130001    8488 round_trippers.go:577] Response Headers:
	I0327 23:47:07.130001    8488 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6da92fab-2b78-421e-a606-c6bce8c08812
	I0327 23:47:07.130001    8488 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7d975901-91e1-451b-9e7d-51fab94bcf42
	I0327 23:47:07.130001    8488 round_trippers.go:580]     Date: Wed, 27 Mar 2024 23:47:07 GMT
	I0327 23:47:07.130630    8488 round_trippers.go:580]     Audit-Id: 090d96b1-4e72-4cc9-a27e-0fd8c9eae395
	I0327 23:47:07.130630    8488 round_trippers.go:580]     Cache-Control: no-cache, private
	I0327 23:47:07.130630    8488 round_trippers.go:580]     Content-Type: application/json
	I0327 23:47:07.130719    8488 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-848700","uid":"7cf3af13-2a24-42b6-949f-e852ccdeb5d5","resourceVersion":"497","creationTimestamp":"2024-03-27T23:44:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-848700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"functional-848700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_27T23_44_08_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-27T23:44:03Z","fieldsType":"Fie [truncated 4795 chars]
	I0327 23:47:07.619538    8488 round_trippers.go:463] GET https://172.28.236.250:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-848700
	I0327 23:47:07.619538    8488 round_trippers.go:469] Request Headers:
	I0327 23:47:07.619538    8488 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:47:07.619538    8488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0327 23:47:07.624493    8488 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0327 23:47:07.624493    8488 round_trippers.go:577] Response Headers:
	I0327 23:47:07.624493    8488 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6da92fab-2b78-421e-a606-c6bce8c08812
	I0327 23:47:07.624493    8488 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7d975901-91e1-451b-9e7d-51fab94bcf42
	I0327 23:47:07.624493    8488 round_trippers.go:580]     Date: Wed, 27 Mar 2024 23:47:07 GMT
	I0327 23:47:07.624493    8488 round_trippers.go:580]     Audit-Id: 14e238f3-9a66-41e0-90b6-ecee650a9fe0
	I0327 23:47:07.624493    8488 round_trippers.go:580]     Cache-Control: no-cache, private
	I0327 23:47:07.624493    8488 round_trippers.go:580]     Content-Type: application/json
	I0327 23:47:07.624967    8488 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-functional-848700","namespace":"kube-system","uid":"e4bdd59f-da15-4c7c-bf7b-edf829ae8b0b","resourceVersion":"579","creationTimestamp":"2024-03-27T23:44:07Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.28.236.250:8441","kubernetes.io/config.hash":"be479247e39e66a2012516faa69219b0","kubernetes.io/config.mirror":"be479247e39e66a2012516faa69219b0","kubernetes.io/config.seen":"2024-03-27T23:43:57.947501680Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-848700","uid":"7cf3af13-2a24-42b6-949f-e852ccdeb5d5","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-27T23:44:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernet [truncated 7914 chars]
	I0327 23:47:07.625612    8488 round_trippers.go:463] GET https://172.28.236.250:8441/api/v1/nodes/functional-848700
	I0327 23:47:07.625612    8488 round_trippers.go:469] Request Headers:
	I0327 23:47:07.625612    8488 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:47:07.625612    8488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0327 23:47:07.630743    8488 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0327 23:47:07.630858    8488 round_trippers.go:577] Response Headers:
	I0327 23:47:07.630858    8488 round_trippers.go:580]     Cache-Control: no-cache, private
	I0327 23:47:07.630858    8488 round_trippers.go:580]     Content-Type: application/json
	I0327 23:47:07.630858    8488 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6da92fab-2b78-421e-a606-c6bce8c08812
	I0327 23:47:07.630858    8488 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7d975901-91e1-451b-9e7d-51fab94bcf42
	I0327 23:47:07.630858    8488 round_trippers.go:580]     Date: Wed, 27 Mar 2024 23:47:07 GMT
	I0327 23:47:07.630858    8488 round_trippers.go:580]     Audit-Id: 8b10eaa8-b205-45a2-8d57-50e9648f5c58
	I0327 23:47:07.630858    8488 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-848700","uid":"7cf3af13-2a24-42b6-949f-e852ccdeb5d5","resourceVersion":"497","creationTimestamp":"2024-03-27T23:44:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-848700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"functional-848700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_27T23_44_08_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-27T23:44:03Z","fieldsType":"Fie [truncated 4795 chars]
	I0327 23:47:07.631464    8488 pod_ready.go:92] pod "kube-apiserver-functional-848700" in "kube-system" namespace has status "Ready":"True"
	I0327 23:47:07.631562    8488 pod_ready.go:81] duration metric: took 1.0155235s for pod "kube-apiserver-functional-848700" in "kube-system" namespace to be "Ready" ...
	I0327 23:47:07.631562    8488 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-functional-848700" in "kube-system" namespace to be "Ready" ...
	I0327 23:47:07.631649    8488 round_trippers.go:463] GET https://172.28.236.250:8441/api/v1/namespaces/kube-system/pods/kube-controller-manager-functional-848700
	I0327 23:47:07.631702    8488 round_trippers.go:469] Request Headers:
	I0327 23:47:07.631702    8488 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:47:07.631702    8488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0327 23:47:07.635386    8488 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0327 23:47:07.635488    8488 round_trippers.go:577] Response Headers:
	I0327 23:47:07.635488    8488 round_trippers.go:580]     Date: Wed, 27 Mar 2024 23:47:07 GMT
	I0327 23:47:07.635488    8488 round_trippers.go:580]     Audit-Id: f49ef3fa-e471-4a72-9938-0bb0d74d89e8
	I0327 23:47:07.635488    8488 round_trippers.go:580]     Cache-Control: no-cache, private
	I0327 23:47:07.635488    8488 round_trippers.go:580]     Content-Type: application/json
	I0327 23:47:07.635488    8488 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6da92fab-2b78-421e-a606-c6bce8c08812
	I0327 23:47:07.635488    8488 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7d975901-91e1-451b-9e7d-51fab94bcf42
	I0327 23:47:07.636374    8488 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-functional-848700","namespace":"kube-system","uid":"b43977b6-8078-475c-b311-a70a0e45e1e0","resourceVersion":"567","creationTimestamp":"2024-03-27T23:44:08Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"a4ae8c14ba28e2accfb7447b00af64be","kubernetes.io/config.mirror":"a4ae8c14ba28e2accfb7447b00af64be","kubernetes.io/config.seen":"2024-03-27T23:43:57.947502881Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-848700","uid":"7cf3af13-2a24-42b6-949f-e852ccdeb5d5","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-27T23:44:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{"." [truncated 7477 chars]
	I0327 23:47:07.636971    8488 round_trippers.go:463] GET https://172.28.236.250:8441/api/v1/nodes/functional-848700
	I0327 23:47:07.637046    8488 round_trippers.go:469] Request Headers:
	I0327 23:47:07.637090    8488 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:47:07.637090    8488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0327 23:47:07.640124    8488 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0327 23:47:07.640295    8488 round_trippers.go:577] Response Headers:
	I0327 23:47:07.640295    8488 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6da92fab-2b78-421e-a606-c6bce8c08812
	I0327 23:47:07.640295    8488 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7d975901-91e1-451b-9e7d-51fab94bcf42
	I0327 23:47:07.640295    8488 round_trippers.go:580]     Date: Wed, 27 Mar 2024 23:47:07 GMT
	I0327 23:47:07.640295    8488 round_trippers.go:580]     Audit-Id: 5df5edd4-820f-4b9f-be85-5e43cc3e5fee
	I0327 23:47:07.640295    8488 round_trippers.go:580]     Cache-Control: no-cache, private
	I0327 23:47:07.640295    8488 round_trippers.go:580]     Content-Type: application/json
	I0327 23:47:07.640492    8488 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-848700","uid":"7cf3af13-2a24-42b6-949f-e852ccdeb5d5","resourceVersion":"497","creationTimestamp":"2024-03-27T23:44:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-848700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"functional-848700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_27T23_44_08_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-27T23:44:03Z","fieldsType":"Fie [truncated 4795 chars]
	I0327 23:47:07.640606    8488 pod_ready.go:92] pod "kube-controller-manager-functional-848700" in "kube-system" namespace has status "Ready":"True"
	I0327 23:47:07.640606    8488 pod_ready.go:81] duration metric: took 9.0443ms for pod "kube-controller-manager-functional-848700" in "kube-system" namespace to be "Ready" ...
	I0327 23:47:07.640606    8488 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-njwdc" in "kube-system" namespace to be "Ready" ...
	I0327 23:47:07.640606    8488 round_trippers.go:463] GET https://172.28.236.250:8441/api/v1/namespaces/kube-system/pods/kube-proxy-njwdc
	I0327 23:47:07.640606    8488 round_trippers.go:469] Request Headers:
	I0327 23:47:07.640606    8488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0327 23:47:07.640606    8488 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:47:07.644644    8488 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0327 23:47:07.644672    8488 round_trippers.go:577] Response Headers:
	I0327 23:47:07.644725    8488 round_trippers.go:580]     Audit-Id: 70538f2c-7d43-4857-934d-e91d3353ce92
	I0327 23:47:07.644725    8488 round_trippers.go:580]     Cache-Control: no-cache, private
	I0327 23:47:07.644725    8488 round_trippers.go:580]     Content-Type: application/json
	I0327 23:47:07.644725    8488 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6da92fab-2b78-421e-a606-c6bce8c08812
	I0327 23:47:07.644725    8488 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7d975901-91e1-451b-9e7d-51fab94bcf42
	I0327 23:47:07.644725    8488 round_trippers.go:580]     Date: Wed, 27 Mar 2024 23:47:07 GMT
	I0327 23:47:07.646461    8488 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-njwdc","generateName":"kube-proxy-","namespace":"kube-system","uid":"862af240-aef4-4288-818c-2a9a96564cba","resourceVersion":"511","creationTimestamp":"2024-03-27T23:44:20Z","labels":{"controller-revision-hash":"7659797656","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"206efe0c-bd7c-41b7-b05d-e0c93e79b5d7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-27T23:44:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"206efe0c-bd7c-41b7-b05d-e0c93e79b5d7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 6040 chars]
	I0327 23:47:07.646680    8488 round_trippers.go:463] GET https://172.28.236.250:8441/api/v1/nodes/functional-848700
	I0327 23:47:07.646680    8488 round_trippers.go:469] Request Headers:
	I0327 23:47:07.646680    8488 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:47:07.646680    8488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0327 23:47:07.654873    8488 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0327 23:47:07.654873    8488 round_trippers.go:577] Response Headers:
	I0327 23:47:07.654873    8488 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6da92fab-2b78-421e-a606-c6bce8c08812
	I0327 23:47:07.654873    8488 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7d975901-91e1-451b-9e7d-51fab94bcf42
	I0327 23:47:07.654873    8488 round_trippers.go:580]     Date: Wed, 27 Mar 2024 23:47:07 GMT
	I0327 23:47:07.654873    8488 round_trippers.go:580]     Audit-Id: 97a10ae7-3e0c-4f06-a819-b96814a36668
	I0327 23:47:07.655359    8488 round_trippers.go:580]     Cache-Control: no-cache, private
	I0327 23:47:07.655411    8488 round_trippers.go:580]     Content-Type: application/json
	I0327 23:47:07.656693    8488 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-848700","uid":"7cf3af13-2a24-42b6-949f-e852ccdeb5d5","resourceVersion":"497","creationTimestamp":"2024-03-27T23:44:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-848700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"functional-848700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_27T23_44_08_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-27T23:44:03Z","fieldsType":"Fie [truncated 4795 chars]
	I0327 23:47:07.657587    8488 pod_ready.go:92] pod "kube-proxy-njwdc" in "kube-system" namespace has status "Ready":"True"
	I0327 23:47:07.657670    8488 pod_ready.go:81] duration metric: took 17.0639ms for pod "kube-proxy-njwdc" in "kube-system" namespace to be "Ready" ...
	I0327 23:47:07.657700    8488 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-functional-848700" in "kube-system" namespace to be "Ready" ...
	I0327 23:47:07.657758    8488 round_trippers.go:463] GET https://172.28.236.250:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-848700
	I0327 23:47:07.657817    8488 round_trippers.go:469] Request Headers:
	I0327 23:47:07.657817    8488 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:47:07.657817    8488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0327 23:47:07.659796    8488 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0327 23:47:07.659796    8488 round_trippers.go:577] Response Headers:
	I0327 23:47:07.659796    8488 round_trippers.go:580]     Date: Wed, 27 Mar 2024 23:47:07 GMT
	I0327 23:47:07.660698    8488 round_trippers.go:580]     Audit-Id: 8af5161f-896e-476f-b1e3-44b116b31ee1
	I0327 23:47:07.660698    8488 round_trippers.go:580]     Cache-Control: no-cache, private
	I0327 23:47:07.660698    8488 round_trippers.go:580]     Content-Type: application/json
	I0327 23:47:07.660698    8488 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6da92fab-2b78-421e-a606-c6bce8c08812
	I0327 23:47:07.660698    8488 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7d975901-91e1-451b-9e7d-51fab94bcf42
	I0327 23:47:07.660698    8488 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-functional-848700","namespace":"kube-system","uid":"e3288622-5a80-4186-aedf-e189cadec8fb","resourceVersion":"571","creationTimestamp":"2024-03-27T23:44:09Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"8e65ee72f09147182b6adbe0f82ab216","kubernetes.io/config.mirror":"8e65ee72f09147182b6adbe0f82ab216","kubernetes.io/config.seen":"2024-03-27T23:44:08.711335826Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-848700","uid":"7cf3af13-2a24-42b6-949f-e852ccdeb5d5","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-27T23:44:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component": [truncated 5207 chars]
	I0327 23:47:07.660698    8488 round_trippers.go:463] GET https://172.28.236.250:8441/api/v1/nodes/functional-848700
	I0327 23:47:07.660698    8488 round_trippers.go:469] Request Headers:
	I0327 23:47:07.660698    8488 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:47:07.660698    8488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0327 23:47:07.664753    8488 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0327 23:47:07.664753    8488 round_trippers.go:577] Response Headers:
	I0327 23:47:07.664753    8488 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6da92fab-2b78-421e-a606-c6bce8c08812
	I0327 23:47:07.664753    8488 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7d975901-91e1-451b-9e7d-51fab94bcf42
	I0327 23:47:07.664753    8488 round_trippers.go:580]     Date: Wed, 27 Mar 2024 23:47:07 GMT
	I0327 23:47:07.664753    8488 round_trippers.go:580]     Audit-Id: 842678c4-9c3e-4eca-9fdf-5a51ce06fb7b
	I0327 23:47:07.664753    8488 round_trippers.go:580]     Cache-Control: no-cache, private
	I0327 23:47:07.664753    8488 round_trippers.go:580]     Content-Type: application/json
	I0327 23:47:07.665138    8488 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-848700","uid":"7cf3af13-2a24-42b6-949f-e852ccdeb5d5","resourceVersion":"497","creationTimestamp":"2024-03-27T23:44:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-848700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"functional-848700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_27T23_44_08_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-27T23:44:03Z","fieldsType":"Fie [truncated 4795 chars]
	I0327 23:47:07.665379    8488 pod_ready.go:92] pod "kube-scheduler-functional-848700" in "kube-system" namespace has status "Ready":"True"
	I0327 23:47:07.665379    8488 pod_ready.go:81] duration metric: took 7.6785ms for pod "kube-scheduler-functional-848700" in "kube-system" namespace to be "Ready" ...
	I0327 23:47:07.665379    8488 pod_ready.go:38] duration metric: took 12.0875554s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0327 23:47:07.665379    8488 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0327 23:47:07.703246    8488 command_runner.go:130] > -16
	I0327 23:47:07.703246    8488 ops.go:34] apiserver oom_adj: -16
	I0327 23:47:07.703246    8488 kubeadm.go:591] duration metric: took 23.2205855s to restartPrimaryControlPlane
	I0327 23:47:07.703246    8488 kubeadm.go:393] duration metric: took 23.2952369s to StartCluster
	I0327 23:47:07.703246    8488 settings.go:142] acquiring lock: {Name:mk6b97e58c5fe8f88c3b8025e136ed13b1b7453d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 23:47:07.703246    8488 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0327 23:47:07.705243    8488 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\kubeconfig: {Name:mk4f4c590fd703778dedd3b8c3d630c561af8c6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 23:47:07.707241    8488 start.go:234] Will wait 6m0s for node &{Name: IP:172.28.236.250 Port:8441 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0327 23:47:07.707241    8488 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0327 23:47:07.714260    8488 out.go:177] * Verifying Kubernetes components...
	I0327 23:47:07.707241    8488 addons.go:69] Setting storage-provisioner=true in profile "functional-848700"
	I0327 23:47:07.707241    8488 addons.go:69] Setting default-storageclass=true in profile "functional-848700"
	I0327 23:47:07.707241    8488 config.go:182] Loaded profile config "functional-848700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0327 23:47:07.714260    8488 addons.go:234] Setting addon storage-provisioner=true in "functional-848700"
	W0327 23:47:07.717311    8488 addons.go:243] addon storage-provisioner should already be in state true
	I0327 23:47:07.714260    8488 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "functional-848700"
	I0327 23:47:07.717420    8488 host.go:66] Checking if "functional-848700" exists ...
	I0327 23:47:07.718167    8488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-848700 ).state
	I0327 23:47:07.718872    8488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-848700 ).state
	I0327 23:47:07.732357    8488 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0327 23:47:08.065051    8488 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0327 23:47:08.102328    8488 node_ready.go:35] waiting up to 6m0s for node "functional-848700" to be "Ready" ...
	I0327 23:47:08.102328    8488 round_trippers.go:463] GET https://172.28.236.250:8441/api/v1/nodes/functional-848700
	I0327 23:47:08.102328    8488 round_trippers.go:469] Request Headers:
	I0327 23:47:08.102328    8488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0327 23:47:08.102328    8488 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:47:08.106978    8488 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0327 23:47:08.106978    8488 round_trippers.go:577] Response Headers:
	I0327 23:47:08.107222    8488 round_trippers.go:580]     Content-Type: application/json
	I0327 23:47:08.107222    8488 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6da92fab-2b78-421e-a606-c6bce8c08812
	I0327 23:47:08.107222    8488 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7d975901-91e1-451b-9e7d-51fab94bcf42
	I0327 23:47:08.107222    8488 round_trippers.go:580]     Date: Wed, 27 Mar 2024 23:47:08 GMT
	I0327 23:47:08.107222    8488 round_trippers.go:580]     Audit-Id: 456332e8-41fc-4dc3-abca-221dbfdac69e
	I0327 23:47:08.107222    8488 round_trippers.go:580]     Cache-Control: no-cache, private
	I0327 23:47:08.107531    8488 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-848700","uid":"7cf3af13-2a24-42b6-949f-e852ccdeb5d5","resourceVersion":"497","creationTimestamp":"2024-03-27T23:44:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-848700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"functional-848700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_27T23_44_08_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-27T23:44:03Z","fieldsType":"Fie [truncated 4795 chars]
	I0327 23:47:08.108030    8488 node_ready.go:49] node "functional-848700" has status "Ready":"True"
	I0327 23:47:08.108030    8488 node_ready.go:38] duration metric: took 5.7017ms for node "functional-848700" to be "Ready" ...
	I0327 23:47:08.108116    8488 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0327 23:47:08.108196    8488 round_trippers.go:463] GET https://172.28.236.250:8441/api/v1/namespaces/kube-system/pods
	I0327 23:47:08.108283    8488 round_trippers.go:469] Request Headers:
	I0327 23:47:08.108283    8488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0327 23:47:08.108283    8488 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:47:08.112975    8488 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0327 23:47:08.112975    8488 round_trippers.go:577] Response Headers:
	I0327 23:47:08.114035    8488 round_trippers.go:580]     Audit-Id: 12f22c6e-2d90-47f2-b662-c7b37f7cfad4
	I0327 23:47:08.114035    8488 round_trippers.go:580]     Cache-Control: no-cache, private
	I0327 23:47:08.114035    8488 round_trippers.go:580]     Content-Type: application/json
	I0327 23:47:08.114035    8488 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6da92fab-2b78-421e-a606-c6bce8c08812
	I0327 23:47:08.114035    8488 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7d975901-91e1-451b-9e7d-51fab94bcf42
	I0327 23:47:08.114147    8488 round_trippers.go:580]     Date: Wed, 27 Mar 2024 23:47:08 GMT
	I0327 23:47:08.115011    8488 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"579"},"items":[{"metadata":{"name":"coredns-76f75df574-kl22d","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"68395922-8215-40eb-ba25-a66d3a484a61","resourceVersion":"565","creationTimestamp":"2024-03-27T23:44:20Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"d781c69f-ccf7-46ad-b095-0f53e5da83c8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-27T23:44:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d781c69f-ccf7-46ad-b095-0f53e5da83c8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 50141 chars]
	I0327 23:47:08.117595    8488 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-kl22d" in "kube-system" namespace to be "Ready" ...
	I0327 23:47:08.117781    8488 round_trippers.go:463] GET https://172.28.236.250:8441/api/v1/namespaces/kube-system/pods/coredns-76f75df574-kl22d
	I0327 23:47:08.117853    8488 round_trippers.go:469] Request Headers:
	I0327 23:47:08.117853    8488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0327 23:47:08.117853    8488 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:47:08.121135    8488 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0327 23:47:08.121306    8488 round_trippers.go:577] Response Headers:
	I0327 23:47:08.121340    8488 round_trippers.go:580]     Cache-Control: no-cache, private
	I0327 23:47:08.121369    8488 round_trippers.go:580]     Content-Type: application/json
	I0327 23:47:08.121369    8488 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6da92fab-2b78-421e-a606-c6bce8c08812
	I0327 23:47:08.121369    8488 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7d975901-91e1-451b-9e7d-51fab94bcf42
	I0327 23:47:08.121369    8488 round_trippers.go:580]     Date: Wed, 27 Mar 2024 23:47:08 GMT
	I0327 23:47:08.121369    8488 round_trippers.go:580]     Audit-Id: 49efa128-7778-4728-9888-1d2a2dbcd7e5
	I0327 23:47:08.121369    8488 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-76f75df574-kl22d","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"68395922-8215-40eb-ba25-a66d3a484a61","resourceVersion":"565","creationTimestamp":"2024-03-27T23:44:20Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"d781c69f-ccf7-46ad-b095-0f53e5da83c8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-27T23:44:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d781c69f-ccf7-46ad-b095-0f53e5da83c8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6452 chars]
	I0327 23:47:08.220316    8488 request.go:629] Waited for 97.4721ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.236.250:8441/api/v1/nodes/functional-848700
	I0327 23:47:08.220316    8488 round_trippers.go:463] GET https://172.28.236.250:8441/api/v1/nodes/functional-848700
	I0327 23:47:08.220316    8488 round_trippers.go:469] Request Headers:
	I0327 23:47:08.220316    8488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0327 23:47:08.220316    8488 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:47:08.225170    8488 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0327 23:47:08.225170    8488 round_trippers.go:577] Response Headers:
	I0327 23:47:08.225170    8488 round_trippers.go:580]     Audit-Id: 2c38b5f4-7947-4523-975f-69bbb6930485
	I0327 23:47:08.225170    8488 round_trippers.go:580]     Cache-Control: no-cache, private
	I0327 23:47:08.225170    8488 round_trippers.go:580]     Content-Type: application/json
	I0327 23:47:08.225170    8488 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6da92fab-2b78-421e-a606-c6bce8c08812
	I0327 23:47:08.225170    8488 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7d975901-91e1-451b-9e7d-51fab94bcf42
	I0327 23:47:08.225170    8488 round_trippers.go:580]     Date: Wed, 27 Mar 2024 23:47:08 GMT
	I0327 23:47:08.225170    8488 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-848700","uid":"7cf3af13-2a24-42b6-949f-e852ccdeb5d5","resourceVersion":"497","creationTimestamp":"2024-03-27T23:44:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-848700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"functional-848700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_27T23_44_08_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Up
date","apiVersion":"v1","time":"2024-03-27T23:44:03Z","fieldsType":"Fie [truncated 4795 chars]
	I0327 23:47:08.226322    8488 pod_ready.go:92] pod "coredns-76f75df574-kl22d" in "kube-system" namespace has status "Ready":"True"
	I0327 23:47:08.226397    8488 pod_ready.go:81] duration metric: took 108.6928ms for pod "coredns-76f75df574-kl22d" in "kube-system" namespace to be "Ready" ...
	I0327 23:47:08.226397    8488 pod_ready.go:78] waiting up to 6m0s for pod "etcd-functional-848700" in "kube-system" namespace to be "Ready" ...
	I0327 23:47:08.411468    8488 request.go:629] Waited for 185.0698ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.236.250:8441/api/v1/namespaces/kube-system/pods/etcd-functional-848700
	I0327 23:47:08.412073    8488 round_trippers.go:463] GET https://172.28.236.250:8441/api/v1/namespaces/kube-system/pods/etcd-functional-848700
	I0327 23:47:08.412073    8488 round_trippers.go:469] Request Headers:
	I0327 23:47:08.412385    8488 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:47:08.412385    8488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0327 23:47:08.419134    8488 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0327 23:47:08.419134    8488 round_trippers.go:577] Response Headers:
	I0327 23:47:08.419134    8488 round_trippers.go:580]     Audit-Id: f8c01f12-4a46-4619-9496-512be2545b8b
	I0327 23:47:08.419134    8488 round_trippers.go:580]     Cache-Control: no-cache, private
	I0327 23:47:08.419134    8488 round_trippers.go:580]     Content-Type: application/json
	I0327 23:47:08.419134    8488 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6da92fab-2b78-421e-a606-c6bce8c08812
	I0327 23:47:08.419134    8488 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7d975901-91e1-451b-9e7d-51fab94bcf42
	I0327 23:47:08.419134    8488 round_trippers.go:580]     Date: Wed, 27 Mar 2024 23:47:08 GMT
	I0327 23:47:08.419651    8488 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-848700","namespace":"kube-system","uid":"05354af6-6cc0-48a9-899a-aba82a561744","resourceVersion":"575","creationTimestamp":"2024-03-27T23:44:04Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.28.236.250:2379","kubernetes.io/config.hash":"4f92c5f2db0a0990f33d8686b823140a","kubernetes.io/config.mirror":"4f92c5f2db0a0990f33d8686b823140a","kubernetes.io/config.seen":"2024-03-27T23:43:57.947500179Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-848700","uid":"7cf3af13-2a24-42b6-949f-e852ccdeb5d5","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-27T23:44:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertis
e-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/con [truncated 6384 chars]
	I0327 23:47:08.617531    8488 request.go:629] Waited for 197.1658ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.236.250:8441/api/v1/nodes/functional-848700
	I0327 23:47:08.617681    8488 round_trippers.go:463] GET https://172.28.236.250:8441/api/v1/nodes/functional-848700
	I0327 23:47:08.617897    8488 round_trippers.go:469] Request Headers:
	I0327 23:47:08.617897    8488 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:47:08.617897    8488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0327 23:47:08.621506    8488 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0327 23:47:08.621506    8488 round_trippers.go:577] Response Headers:
	I0327 23:47:08.621506    8488 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6da92fab-2b78-421e-a606-c6bce8c08812
	I0327 23:47:08.621506    8488 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7d975901-91e1-451b-9e7d-51fab94bcf42
	I0327 23:47:08.621506    8488 round_trippers.go:580]     Date: Wed, 27 Mar 2024 23:47:08 GMT
	I0327 23:47:08.621506    8488 round_trippers.go:580]     Audit-Id: 23c23b80-4236-4f35-8874-665a02ad9bfb
	I0327 23:47:08.621506    8488 round_trippers.go:580]     Cache-Control: no-cache, private
	I0327 23:47:08.621506    8488 round_trippers.go:580]     Content-Type: application/json
	I0327 23:47:08.621506    8488 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-848700","uid":"7cf3af13-2a24-42b6-949f-e852ccdeb5d5","resourceVersion":"497","creationTimestamp":"2024-03-27T23:44:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-848700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"functional-848700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_27T23_44_08_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Up
date","apiVersion":"v1","time":"2024-03-27T23:44:03Z","fieldsType":"Fie [truncated 4795 chars]
	I0327 23:47:08.622897    8488 pod_ready.go:92] pod "etcd-functional-848700" in "kube-system" namespace has status "Ready":"True"
	I0327 23:47:08.622897    8488 pod_ready.go:81] duration metric: took 396.498ms for pod "etcd-functional-848700" in "kube-system" namespace to be "Ready" ...
	I0327 23:47:08.622897    8488 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-functional-848700" in "kube-system" namespace to be "Ready" ...
	I0327 23:47:08.807054    8488 request.go:629] Waited for 183.5859ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.236.250:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-848700
	I0327 23:47:08.807167    8488 round_trippers.go:463] GET https://172.28.236.250:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-848700
	I0327 23:47:08.807335    8488 round_trippers.go:469] Request Headers:
	I0327 23:47:08.807335    8488 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:47:08.807335    8488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0327 23:47:08.811756    8488 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0327 23:47:08.811821    8488 round_trippers.go:577] Response Headers:
	I0327 23:47:08.811821    8488 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7d975901-91e1-451b-9e7d-51fab94bcf42
	I0327 23:47:08.811821    8488 round_trippers.go:580]     Date: Wed, 27 Mar 2024 23:47:08 GMT
	I0327 23:47:08.811821    8488 round_trippers.go:580]     Audit-Id: 5332f248-7a78-46b4-b745-e9b6c7ee9b9a
	I0327 23:47:08.811821    8488 round_trippers.go:580]     Cache-Control: no-cache, private
	I0327 23:47:08.811821    8488 round_trippers.go:580]     Content-Type: application/json
	I0327 23:47:08.811821    8488 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6da92fab-2b78-421e-a606-c6bce8c08812
	I0327 23:47:08.811821    8488 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-functional-848700","namespace":"kube-system","uid":"e4bdd59f-da15-4c7c-bf7b-edf829ae8b0b","resourceVersion":"579","creationTimestamp":"2024-03-27T23:44:07Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.28.236.250:8441","kubernetes.io/config.hash":"be479247e39e66a2012516faa69219b0","kubernetes.io/config.mirror":"be479247e39e66a2012516faa69219b0","kubernetes.io/config.seen":"2024-03-27T23:43:57.947501680Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-848700","uid":"7cf3af13-2a24-42b6-949f-e852ccdeb5d5","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-27T23:44:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.
kubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernet [truncated 7914 chars]
	I0327 23:47:09.013021    8488 request.go:629] Waited for 200.1819ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.236.250:8441/api/v1/nodes/functional-848700
	I0327 23:47:09.013193    8488 round_trippers.go:463] GET https://172.28.236.250:8441/api/v1/nodes/functional-848700
	I0327 23:47:09.013193    8488 round_trippers.go:469] Request Headers:
	I0327 23:47:09.013193    8488 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:47:09.013193    8488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0327 23:47:09.018919    8488 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0327 23:47:09.018919    8488 round_trippers.go:577] Response Headers:
	I0327 23:47:09.018919    8488 round_trippers.go:580]     Date: Wed, 27 Mar 2024 23:47:09 GMT
	I0327 23:47:09.018919    8488 round_trippers.go:580]     Audit-Id: 90100bd1-7f34-4695-9067-e68ddfd9a388
	I0327 23:47:09.018919    8488 round_trippers.go:580]     Cache-Control: no-cache, private
	I0327 23:47:09.018919    8488 round_trippers.go:580]     Content-Type: application/json
	I0327 23:47:09.019023    8488 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6da92fab-2b78-421e-a606-c6bce8c08812
	I0327 23:47:09.019023    8488 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7d975901-91e1-451b-9e7d-51fab94bcf42
	I0327 23:47:09.019299    8488 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-848700","uid":"7cf3af13-2a24-42b6-949f-e852ccdeb5d5","resourceVersion":"497","creationTimestamp":"2024-03-27T23:44:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-848700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"functional-848700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_27T23_44_08_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Up
date","apiVersion":"v1","time":"2024-03-27T23:44:03Z","fieldsType":"Fie [truncated 4795 chars]
	I0327 23:47:09.019299    8488 pod_ready.go:92] pod "kube-apiserver-functional-848700" in "kube-system" namespace has status "Ready":"True"
	I0327 23:47:09.019299    8488 pod_ready.go:81] duration metric: took 396.399ms for pod "kube-apiserver-functional-848700" in "kube-system" namespace to be "Ready" ...
	I0327 23:47:09.019841    8488 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-functional-848700" in "kube-system" namespace to be "Ready" ...
	I0327 23:47:09.219008    8488 request.go:629] Waited for 198.8712ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.236.250:8441/api/v1/namespaces/kube-system/pods/kube-controller-manager-functional-848700
	I0327 23:47:09.219008    8488 round_trippers.go:463] GET https://172.28.236.250:8441/api/v1/namespaces/kube-system/pods/kube-controller-manager-functional-848700
	I0327 23:47:09.219008    8488 round_trippers.go:469] Request Headers:
	I0327 23:47:09.219008    8488 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:47:09.219008    8488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0327 23:47:09.224630    8488 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0327 23:47:09.224680    8488 round_trippers.go:577] Response Headers:
	I0327 23:47:09.224680    8488 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6da92fab-2b78-421e-a606-c6bce8c08812
	I0327 23:47:09.224680    8488 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7d975901-91e1-451b-9e7d-51fab94bcf42
	I0327 23:47:09.224680    8488 round_trippers.go:580]     Date: Wed, 27 Mar 2024 23:47:09 GMT
	I0327 23:47:09.224680    8488 round_trippers.go:580]     Audit-Id: 6704b817-1f3b-4d4d-b8e0-1fb4045f200b
	I0327 23:47:09.224680    8488 round_trippers.go:580]     Cache-Control: no-cache, private
	I0327 23:47:09.224680    8488 round_trippers.go:580]     Content-Type: application/json
	I0327 23:47:09.224680    8488 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-functional-848700","namespace":"kube-system","uid":"b43977b6-8078-475c-b311-a70a0e45e1e0","resourceVersion":"567","creationTimestamp":"2024-03-27T23:44:08Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"a4ae8c14ba28e2accfb7447b00af64be","kubernetes.io/config.mirror":"a4ae8c14ba28e2accfb7447b00af64be","kubernetes.io/config.seen":"2024-03-27T23:43:57.947502881Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-848700","uid":"7cf3af13-2a24-42b6-949f-e852ccdeb5d5","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-27T23:44:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes
.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{"." [truncated 7477 chars]
	I0327 23:47:09.408978    8488 request.go:629] Waited for 183.2108ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.236.250:8441/api/v1/nodes/functional-848700
	I0327 23:47:09.408978    8488 round_trippers.go:463] GET https://172.28.236.250:8441/api/v1/nodes/functional-848700
	I0327 23:47:09.408978    8488 round_trippers.go:469] Request Headers:
	I0327 23:47:09.408978    8488 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:47:09.409316    8488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0327 23:47:09.412980    8488 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0327 23:47:09.413971    8488 round_trippers.go:577] Response Headers:
	I0327 23:47:09.413971    8488 round_trippers.go:580]     Audit-Id: 39db10e1-c33a-41e0-942a-579f3c242379
	I0327 23:47:09.413971    8488 round_trippers.go:580]     Cache-Control: no-cache, private
	I0327 23:47:09.413971    8488 round_trippers.go:580]     Content-Type: application/json
	I0327 23:47:09.413971    8488 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6da92fab-2b78-421e-a606-c6bce8c08812
	I0327 23:47:09.413971    8488 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7d975901-91e1-451b-9e7d-51fab94bcf42
	I0327 23:47:09.413971    8488 round_trippers.go:580]     Date: Wed, 27 Mar 2024 23:47:09 GMT
	I0327 23:47:09.415018    8488 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-848700","uid":"7cf3af13-2a24-42b6-949f-e852ccdeb5d5","resourceVersion":"497","creationTimestamp":"2024-03-27T23:44:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-848700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"functional-848700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_27T23_44_08_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Up
date","apiVersion":"v1","time":"2024-03-27T23:44:03Z","fieldsType":"Fie [truncated 4795 chars]
	I0327 23:47:09.415545    8488 pod_ready.go:92] pod "kube-controller-manager-functional-848700" in "kube-system" namespace has status "Ready":"True"
	I0327 23:47:09.415632    8488 pod_ready.go:81] duration metric: took 395.7887ms for pod "kube-controller-manager-functional-848700" in "kube-system" namespace to be "Ready" ...
	I0327 23:47:09.415632    8488 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-njwdc" in "kube-system" namespace to be "Ready" ...
	I0327 23:47:09.615214    8488 request.go:629] Waited for 199.2455ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.236.250:8441/api/v1/namespaces/kube-system/pods/kube-proxy-njwdc
	I0327 23:47:09.615303    8488 round_trippers.go:463] GET https://172.28.236.250:8441/api/v1/namespaces/kube-system/pods/kube-proxy-njwdc
	I0327 23:47:09.615303    8488 round_trippers.go:469] Request Headers:
	I0327 23:47:09.615303    8488 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:47:09.615394    8488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0327 23:47:09.618798    8488 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0327 23:47:09.619660    8488 round_trippers.go:577] Response Headers:
	I0327 23:47:09.619660    8488 round_trippers.go:580]     Audit-Id: 3b5314b4-2f9d-4819-8e15-ffc8d6458036
	I0327 23:47:09.619660    8488 round_trippers.go:580]     Cache-Control: no-cache, private
	I0327 23:47:09.619660    8488 round_trippers.go:580]     Content-Type: application/json
	I0327 23:47:09.619660    8488 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6da92fab-2b78-421e-a606-c6bce8c08812
	I0327 23:47:09.619660    8488 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7d975901-91e1-451b-9e7d-51fab94bcf42
	I0327 23:47:09.619660    8488 round_trippers.go:580]     Date: Wed, 27 Mar 2024 23:47:09 GMT
	I0327 23:47:09.620138    8488 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-njwdc","generateName":"kube-proxy-","namespace":"kube-system","uid":"862af240-aef4-4288-818c-2a9a96564cba","resourceVersion":"511","creationTimestamp":"2024-03-27T23:44:20Z","labels":{"controller-revision-hash":"7659797656","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"206efe0c-bd7c-41b7-b05d-e0c93e79b5d7","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-27T23:44:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"206efe0c-bd7c-41b7-b05d-e0c93e79b5d7\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 6040 chars]
	I0327 23:47:09.819012    8488 request.go:629] Waited for 197.9969ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.236.250:8441/api/v1/nodes/functional-848700
	I0327 23:47:09.819012    8488 round_trippers.go:463] GET https://172.28.236.250:8441/api/v1/nodes/functional-848700
	I0327 23:47:09.819230    8488 round_trippers.go:469] Request Headers:
	I0327 23:47:09.819230    8488 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:47:09.819230    8488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0327 23:47:09.825075    8488 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0327 23:47:09.825353    8488 round_trippers.go:577] Response Headers:
	I0327 23:47:09.825353    8488 round_trippers.go:580]     Audit-Id: e88e26a7-02f2-49ee-8198-78efa1d69ecc
	I0327 23:47:09.825353    8488 round_trippers.go:580]     Cache-Control: no-cache, private
	I0327 23:47:09.825353    8488 round_trippers.go:580]     Content-Type: application/json
	I0327 23:47:09.825353    8488 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6da92fab-2b78-421e-a606-c6bce8c08812
	I0327 23:47:09.825353    8488 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7d975901-91e1-451b-9e7d-51fab94bcf42
	I0327 23:47:09.825353    8488 round_trippers.go:580]     Date: Wed, 27 Mar 2024 23:47:09 GMT
	I0327 23:47:09.825894    8488 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-848700","uid":"7cf3af13-2a24-42b6-949f-e852ccdeb5d5","resourceVersion":"497","creationTimestamp":"2024-03-27T23:44:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-848700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"functional-848700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_27T23_44_08_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Up
date","apiVersion":"v1","time":"2024-03-27T23:44:03Z","fieldsType":"Fie [truncated 4795 chars]
	I0327 23:47:09.826592    8488 pod_ready.go:92] pod "kube-proxy-njwdc" in "kube-system" namespace has status "Ready":"True"
	I0327 23:47:09.826680    8488 pod_ready.go:81] duration metric: took 410.9515ms for pod "kube-proxy-njwdc" in "kube-system" namespace to be "Ready" ...
	I0327 23:47:09.826680    8488 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-functional-848700" in "kube-system" namespace to be "Ready" ...
	I0327 23:47:10.007997    8488 request.go:629] Waited for 180.9385ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.236.250:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-848700
	I0327 23:47:10.008176    8488 round_trippers.go:463] GET https://172.28.236.250:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-848700
	I0327 23:47:10.008176    8488 round_trippers.go:469] Request Headers:
	I0327 23:47:10.008176    8488 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:47:10.008238    8488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0327 23:47:10.013535    8488 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0327 23:47:10.013535    8488 round_trippers.go:577] Response Headers:
	I0327 23:47:10.013535    8488 round_trippers.go:580]     Cache-Control: no-cache, private
	I0327 23:47:10.013535    8488 round_trippers.go:580]     Content-Type: application/json
	I0327 23:47:10.013535    8488 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6da92fab-2b78-421e-a606-c6bce8c08812
	I0327 23:47:10.013535    8488 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7d975901-91e1-451b-9e7d-51fab94bcf42
	I0327 23:47:10.013535    8488 round_trippers.go:580]     Date: Wed, 27 Mar 2024 23:47:10 GMT
	I0327 23:47:10.013535    8488 round_trippers.go:580]     Audit-Id: 492e0be6-2649-4c51-a11b-386e98e6a149
	I0327 23:47:10.014073    8488 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-functional-848700","namespace":"kube-system","uid":"e3288622-5a80-4186-aedf-e189cadec8fb","resourceVersion":"571","creationTimestamp":"2024-03-27T23:44:09Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"8e65ee72f09147182b6adbe0f82ab216","kubernetes.io/config.mirror":"8e65ee72f09147182b6adbe0f82ab216","kubernetes.io/config.seen":"2024-03-27T23:44:08.711335826Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-848700","uid":"7cf3af13-2a24-42b6-949f-e852ccdeb5d5","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-27T23:44:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{
},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component": [truncated 5207 chars]
	I0327 23:47:10.044739    8488 main.go:141] libmachine: [stdout =====>] : Running
	
	I0327 23:47:10.044739    8488 main.go:141] libmachine: [stderr =====>] : 
	I0327 23:47:10.044739    8488 main.go:141] libmachine: [stdout =====>] : Running
	
	I0327 23:47:10.044739    8488 main.go:141] libmachine: [stderr =====>] : 
	I0327 23:47:10.047914    8488 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0327 23:47:10.045436    8488 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0327 23:47:10.051319    8488 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0327 23:47:10.051319    8488 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0327 23:47:10.051319    8488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-848700 ).state
	I0327 23:47:10.052040    8488 kapi.go:59] client config for functional-848700: &rest.Config{Host:"https://172.28.236.250:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-848700\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-848700\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil),
CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x26ab500), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0327 23:47:10.053071    8488 addons.go:234] Setting addon default-storageclass=true in "functional-848700"
	W0327 23:47:10.053071    8488 addons.go:243] addon default-storageclass should already be in state true
	I0327 23:47:10.053246    8488 host.go:66] Checking if "functional-848700" exists ...
	I0327 23:47:10.054614    8488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-848700 ).state
	I0327 23:47:10.211709    8488 request.go:629] Waited for 197.2924ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.236.250:8441/api/v1/nodes/functional-848700
	I0327 23:47:10.211775    8488 round_trippers.go:463] GET https://172.28.236.250:8441/api/v1/nodes/functional-848700
	I0327 23:47:10.211775    8488 round_trippers.go:469] Request Headers:
	I0327 23:47:10.211775    8488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0327 23:47:10.211775    8488 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:47:10.216369    8488 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0327 23:47:10.217089    8488 round_trippers.go:577] Response Headers:
	I0327 23:47:10.217089    8488 round_trippers.go:580]     Content-Type: application/json
	I0327 23:47:10.217089    8488 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6da92fab-2b78-421e-a606-c6bce8c08812
	I0327 23:47:10.217089    8488 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7d975901-91e1-451b-9e7d-51fab94bcf42
	I0327 23:47:10.217089    8488 round_trippers.go:580]     Date: Wed, 27 Mar 2024 23:47:10 GMT
	I0327 23:47:10.217089    8488 round_trippers.go:580]     Audit-Id: 48e63a59-91b8-47c3-b6fd-450e2640f31f
	I0327 23:47:10.217227    8488 round_trippers.go:580]     Cache-Control: no-cache, private
	I0327 23:47:10.217847    8488 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-848700","uid":"7cf3af13-2a24-42b6-949f-e852ccdeb5d5","resourceVersion":"497","creationTimestamp":"2024-03-27T23:44:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-848700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"functional-848700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_27T23_44_08_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Up
date","apiVersion":"v1","time":"2024-03-27T23:44:03Z","fieldsType":"Fie [truncated 4795 chars]
	I0327 23:47:10.218575    8488 pod_ready.go:92] pod "kube-scheduler-functional-848700" in "kube-system" namespace has status "Ready":"True"
	I0327 23:47:10.218618    8488 pod_ready.go:81] duration metric: took 391.9362ms for pod "kube-scheduler-functional-848700" in "kube-system" namespace to be "Ready" ...
	I0327 23:47:10.218618    8488 pod_ready.go:38] duration metric: took 2.1104903s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0327 23:47:10.218618    8488 api_server.go:52] waiting for apiserver process to appear ...
	I0327 23:47:10.232258    8488 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0327 23:47:10.262399    8488 command_runner.go:130] > 6556
	I0327 23:47:10.262852    8488 api_server.go:72] duration metric: took 2.5555973s to wait for apiserver process to appear ...
	I0327 23:47:10.262918    8488 api_server.go:88] waiting for apiserver healthz status ...
	I0327 23:47:10.262918    8488 api_server.go:253] Checking apiserver healthz at https://172.28.236.250:8441/healthz ...
	I0327 23:47:10.272976    8488 api_server.go:279] https://172.28.236.250:8441/healthz returned 200:
	ok
	I0327 23:47:10.272976    8488 round_trippers.go:463] GET https://172.28.236.250:8441/version
	I0327 23:47:10.272976    8488 round_trippers.go:469] Request Headers:
	I0327 23:47:10.272976    8488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0327 23:47:10.272976    8488 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:47:10.274563    8488 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0327 23:47:10.274563    8488 round_trippers.go:577] Response Headers:
	I0327 23:47:10.274563    8488 round_trippers.go:580]     Cache-Control: no-cache, private
	I0327 23:47:10.274765    8488 round_trippers.go:580]     Content-Type: application/json
	I0327 23:47:10.274765    8488 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6da92fab-2b78-421e-a606-c6bce8c08812
	I0327 23:47:10.274765    8488 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7d975901-91e1-451b-9e7d-51fab94bcf42
	I0327 23:47:10.274765    8488 round_trippers.go:580]     Content-Length: 263
	I0327 23:47:10.274765    8488 round_trippers.go:580]     Date: Wed, 27 Mar 2024 23:47:10 GMT
	I0327 23:47:10.274765    8488 round_trippers.go:580]     Audit-Id: 01ba9cde-0191-4bb6-b6bc-31b765dea135
	I0327 23:47:10.274824    8488 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "29",
	  "gitVersion": "v1.29.3",
	  "gitCommit": "6813625b7cd706db5bc7388921be03071e1a492d",
	  "gitTreeState": "clean",
	  "buildDate": "2024-03-14T23:58:36Z",
	  "goVersion": "go1.21.8",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0327 23:47:10.274867    8488 api_server.go:141] control plane version: v1.29.3
	I0327 23:47:10.274920    8488 api_server.go:131] duration metric: took 12.0018ms to wait for apiserver health ...
	I0327 23:47:10.274920    8488 system_pods.go:43] waiting for kube-system pods to appear ...
	I0327 23:47:10.414339    8488 request.go:629] Waited for 139.4177ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.236.250:8441/api/v1/namespaces/kube-system/pods
	I0327 23:47:10.414599    8488 round_trippers.go:463] GET https://172.28.236.250:8441/api/v1/namespaces/kube-system/pods
	I0327 23:47:10.414670    8488 round_trippers.go:469] Request Headers:
	I0327 23:47:10.414670    8488 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:47:10.414670    8488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0327 23:47:10.420296    8488 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0327 23:47:10.420296    8488 round_trippers.go:577] Response Headers:
	I0327 23:47:10.420296    8488 round_trippers.go:580]     Audit-Id: 9ff32bcd-7a22-46a5-9bbb-77876f8bd03e
	I0327 23:47:10.421265    8488 round_trippers.go:580]     Cache-Control: no-cache, private
	I0327 23:47:10.421265    8488 round_trippers.go:580]     Content-Type: application/json
	I0327 23:47:10.421265    8488 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6da92fab-2b78-421e-a606-c6bce8c08812
	I0327 23:47:10.421265    8488 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7d975901-91e1-451b-9e7d-51fab94bcf42
	I0327 23:47:10.421324    8488 round_trippers.go:580]     Date: Wed, 27 Mar 2024 23:47:10 GMT
	I0327 23:47:10.422497    8488 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"579"},"items":[{"metadata":{"name":"coredns-76f75df574-kl22d","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"68395922-8215-40eb-ba25-a66d3a484a61","resourceVersion":"565","creationTimestamp":"2024-03-27T23:44:20Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"d781c69f-ccf7-46ad-b095-0f53e5da83c8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-27T23:44:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d781c69f-ccf7-46ad-b095-0f53e5da83c8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 50141 chars]
	I0327 23:47:10.424822    8488 system_pods.go:59] 7 kube-system pods found
	I0327 23:47:10.424822    8488 system_pods.go:61] "coredns-76f75df574-kl22d" [68395922-8215-40eb-ba25-a66d3a484a61] Running
	I0327 23:47:10.424822    8488 system_pods.go:61] "etcd-functional-848700" [05354af6-6cc0-48a9-899a-aba82a561744] Running
	I0327 23:47:10.424822    8488 system_pods.go:61] "kube-apiserver-functional-848700" [e4bdd59f-da15-4c7c-bf7b-edf829ae8b0b] Running
	I0327 23:47:10.424822    8488 system_pods.go:61] "kube-controller-manager-functional-848700" [b43977b6-8078-475c-b311-a70a0e45e1e0] Running
	I0327 23:47:10.424822    8488 system_pods.go:61] "kube-proxy-njwdc" [862af240-aef4-4288-818c-2a9a96564cba] Running
	I0327 23:47:10.424822    8488 system_pods.go:61] "kube-scheduler-functional-848700" [e3288622-5a80-4186-aedf-e189cadec8fb] Running
	I0327 23:47:10.424822    8488 system_pods.go:61] "storage-provisioner" [b22b2f6c-1e15-4539-9cec-25649ec63e34] Running
	I0327 23:47:10.424822    8488 system_pods.go:74] duration metric: took 149.9008ms to wait for pod list to return data ...
	I0327 23:47:10.424822    8488 default_sa.go:34] waiting for default service account to be created ...
	I0327 23:47:10.620011    8488 request.go:629] Waited for 194.6577ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.236.250:8441/api/v1/namespaces/default/serviceaccounts
	I0327 23:47:10.620315    8488 round_trippers.go:463] GET https://172.28.236.250:8441/api/v1/namespaces/default/serviceaccounts
	I0327 23:47:10.620470    8488 round_trippers.go:469] Request Headers:
	I0327 23:47:10.620586    8488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0327 23:47:10.620586    8488 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:47:10.625766    8488 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0327 23:47:10.626328    8488 round_trippers.go:577] Response Headers:
	I0327 23:47:10.626328    8488 round_trippers.go:580]     Audit-Id: 1e987047-93a6-465f-b6cf-ab4567ab7f6b
	I0327 23:47:10.626328    8488 round_trippers.go:580]     Cache-Control: no-cache, private
	I0327 23:47:10.626328    8488 round_trippers.go:580]     Content-Type: application/json
	I0327 23:47:10.626434    8488 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6da92fab-2b78-421e-a606-c6bce8c08812
	I0327 23:47:10.626467    8488 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7d975901-91e1-451b-9e7d-51fab94bcf42
	I0327 23:47:10.626467    8488 round_trippers.go:580]     Content-Length: 261
	I0327 23:47:10.626510    8488 round_trippers.go:580]     Date: Wed, 27 Mar 2024 23:47:10 GMT
	I0327 23:47:10.626510    8488 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"579"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"84c01aa3-c247-4e81-b0a1-0306825cdda2","resourceVersion":"300","creationTimestamp":"2024-03-27T23:44:19Z"}}]}
	I0327 23:47:10.627178    8488 default_sa.go:45] found service account: "default"
	I0327 23:47:10.627304    8488 default_sa.go:55] duration metric: took 202.4812ms for default service account to be created ...
	I0327 23:47:10.627304    8488 system_pods.go:116] waiting for k8s-apps to be running ...
	I0327 23:47:10.808908    8488 request.go:629] Waited for 181.2793ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.236.250:8441/api/v1/namespaces/kube-system/pods
	I0327 23:47:10.809134    8488 round_trippers.go:463] GET https://172.28.236.250:8441/api/v1/namespaces/kube-system/pods
	I0327 23:47:10.809258    8488 round_trippers.go:469] Request Headers:
	I0327 23:47:10.809308    8488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0327 23:47:10.809308    8488 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:47:10.818126    8488 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0327 23:47:10.818126    8488 round_trippers.go:577] Response Headers:
	I0327 23:47:10.818126    8488 round_trippers.go:580]     Audit-Id: 8ad94821-4d76-47c7-8fbe-e82b6947263b
	I0327 23:47:10.818629    8488 round_trippers.go:580]     Cache-Control: no-cache, private
	I0327 23:47:10.818629    8488 round_trippers.go:580]     Content-Type: application/json
	I0327 23:47:10.818629    8488 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6da92fab-2b78-421e-a606-c6bce8c08812
	I0327 23:47:10.818629    8488 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7d975901-91e1-451b-9e7d-51fab94bcf42
	I0327 23:47:10.818629    8488 round_trippers.go:580]     Date: Wed, 27 Mar 2024 23:47:10 GMT
	I0327 23:47:10.819944    8488 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"579"},"items":[{"metadata":{"name":"coredns-76f75df574-kl22d","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"68395922-8215-40eb-ba25-a66d3a484a61","resourceVersion":"565","creationTimestamp":"2024-03-27T23:44:20Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"d781c69f-ccf7-46ad-b095-0f53e5da83c8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-27T23:44:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d781c69f-ccf7-46ad-b095-0f53e5da83c8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 50141 chars]
	I0327 23:47:10.822680    8488 system_pods.go:86] 7 kube-system pods found
	I0327 23:47:10.822748    8488 system_pods.go:89] "coredns-76f75df574-kl22d" [68395922-8215-40eb-ba25-a66d3a484a61] Running
	I0327 23:47:10.822748    8488 system_pods.go:89] "etcd-functional-848700" [05354af6-6cc0-48a9-899a-aba82a561744] Running
	I0327 23:47:10.822808    8488 system_pods.go:89] "kube-apiserver-functional-848700" [e4bdd59f-da15-4c7c-bf7b-edf829ae8b0b] Running
	I0327 23:47:10.822808    8488 system_pods.go:89] "kube-controller-manager-functional-848700" [b43977b6-8078-475c-b311-a70a0e45e1e0] Running
	I0327 23:47:10.822808    8488 system_pods.go:89] "kube-proxy-njwdc" [862af240-aef4-4288-818c-2a9a96564cba] Running
	I0327 23:47:10.822808    8488 system_pods.go:89] "kube-scheduler-functional-848700" [e3288622-5a80-4186-aedf-e189cadec8fb] Running
	I0327 23:47:10.822868    8488 system_pods.go:89] "storage-provisioner" [b22b2f6c-1e15-4539-9cec-25649ec63e34] Running
	I0327 23:47:10.822868    8488 system_pods.go:126] duration metric: took 195.4848ms to wait for k8s-apps to be running ...
	I0327 23:47:10.822868    8488 system_svc.go:44] waiting for kubelet service to be running ....
	I0327 23:47:10.834820    8488 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0327 23:47:10.865823    8488 system_svc.go:56] duration metric: took 42.8757ms WaitForService to wait for kubelet
	I0327 23:47:10.865896    8488 kubeadm.go:576] duration metric: took 3.1586374s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0327 23:47:10.865896    8488 node_conditions.go:102] verifying NodePressure condition ...
	I0327 23:47:11.013588    8488 request.go:629] Waited for 147.4553ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.236.250:8441/api/v1/nodes
	I0327 23:47:11.013941    8488 round_trippers.go:463] GET https://172.28.236.250:8441/api/v1/nodes
	I0327 23:47:11.013941    8488 round_trippers.go:469] Request Headers:
	I0327 23:47:11.013941    8488 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:47:11.013941    8488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0327 23:47:11.017422    8488 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0327 23:47:11.018296    8488 round_trippers.go:577] Response Headers:
	I0327 23:47:11.018296    8488 round_trippers.go:580]     Audit-Id: 17f8b24e-05e1-4e50-a154-a82c6c6daa14
	I0327 23:47:11.018296    8488 round_trippers.go:580]     Cache-Control: no-cache, private
	I0327 23:47:11.018296    8488 round_trippers.go:580]     Content-Type: application/json
	I0327 23:47:11.018296    8488 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6da92fab-2b78-421e-a606-c6bce8c08812
	I0327 23:47:11.018296    8488 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7d975901-91e1-451b-9e7d-51fab94bcf42
	I0327 23:47:11.018296    8488 round_trippers.go:580]     Date: Wed, 27 Mar 2024 23:47:11 GMT
	I0327 23:47:11.018631    8488 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"579"},"items":[{"metadata":{"name":"functional-848700","uid":"7cf3af13-2a24-42b6-949f-e852ccdeb5d5","resourceVersion":"497","creationTimestamp":"2024-03-27T23:44:03Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-848700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"functional-848700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_27T23_44_08_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"m
anagedFields":[{"manager":"kubelet","operation":"Update","apiVersion":" [truncated 4848 chars]
	I0327 23:47:11.018754    8488 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0327 23:47:11.018754    8488 node_conditions.go:123] node cpu capacity is 2
	I0327 23:47:11.018754    8488 node_conditions.go:105] duration metric: took 152.857ms to run NodePressure ...
	I0327 23:47:11.018754    8488 start.go:240] waiting for startup goroutines ...
	I0327 23:47:12.366932    8488 main.go:141] libmachine: [stdout =====>] : Running
	
	I0327 23:47:12.367639    8488 main.go:141] libmachine: [stderr =====>] : 
	I0327 23:47:12.367723    8488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-848700 ).networkadapters[0]).ipaddresses[0]
	I0327 23:47:12.402353    8488 main.go:141] libmachine: [stdout =====>] : Running
	
	I0327 23:47:12.402353    8488 main.go:141] libmachine: [stderr =====>] : 
	I0327 23:47:12.403354    8488 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0327 23:47:12.403429    8488 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0327 23:47:12.403519    8488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-848700 ).state
	I0327 23:47:14.692609    8488 main.go:141] libmachine: [stdout =====>] : Running
	
	I0327 23:47:14.692609    8488 main.go:141] libmachine: [stderr =====>] : 
	I0327 23:47:14.692609    8488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-848700 ).networkadapters[0]).ipaddresses[0]
	I0327 23:47:15.138736    8488 main.go:141] libmachine: [stdout =====>] : 172.28.236.250
	
	I0327 23:47:15.138809    8488 main.go:141] libmachine: [stderr =====>] : 
	I0327 23:47:15.139525    8488 sshutil.go:53] new ssh client: &{IP:172.28.236.250 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-848700\id_rsa Username:docker}
	I0327 23:47:15.276375    8488 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0327 23:47:16.153398    8488 command_runner.go:130] > serviceaccount/storage-provisioner unchanged
	I0327 23:47:16.153398    8488 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner unchanged
	I0327 23:47:16.153398    8488 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner unchanged
	I0327 23:47:16.153398    8488 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner unchanged
	I0327 23:47:16.153398    8488 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath unchanged
	I0327 23:47:16.153398    8488 command_runner.go:130] > pod/storage-provisioner configured
	I0327 23:47:17.462035    8488 main.go:141] libmachine: [stdout =====>] : 172.28.236.250
	
	I0327 23:47:17.462072    8488 main.go:141] libmachine: [stderr =====>] : 
	I0327 23:47:17.462704    8488 sshutil.go:53] new ssh client: &{IP:172.28.236.250 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-848700\id_rsa Username:docker}
	I0327 23:47:17.605950    8488 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0327 23:47:17.801956    8488 command_runner.go:130] > storageclass.storage.k8s.io/standard unchanged
	I0327 23:47:17.802636    8488 round_trippers.go:463] GET https://172.28.236.250:8441/apis/storage.k8s.io/v1/storageclasses
	I0327 23:47:17.802720    8488 round_trippers.go:469] Request Headers:
	I0327 23:47:17.802720    8488 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:47:17.802765    8488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0327 23:47:17.807290    8488 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0327 23:47:17.807290    8488 round_trippers.go:577] Response Headers:
	I0327 23:47:17.807290    8488 round_trippers.go:580]     Audit-Id: ed588127-e944-45ca-9a0c-fd9e8db5ce9f
	I0327 23:47:17.807290    8488 round_trippers.go:580]     Cache-Control: no-cache, private
	I0327 23:47:17.807290    8488 round_trippers.go:580]     Content-Type: application/json
	I0327 23:47:17.807290    8488 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6da92fab-2b78-421e-a606-c6bce8c08812
	I0327 23:47:17.807290    8488 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7d975901-91e1-451b-9e7d-51fab94bcf42
	I0327 23:47:17.807290    8488 round_trippers.go:580]     Content-Length: 1273
	I0327 23:47:17.807290    8488 round_trippers.go:580]     Date: Wed, 27 Mar 2024 23:47:17 GMT
	I0327 23:47:17.807739    8488 request.go:1212] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"586"},"items":[{"metadata":{"name":"standard","uid":"c44524ae-0474-450e-838e-68563d78f59b","resourceVersion":"392","creationTimestamp":"2024-03-27T23:44:30Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-03-27T23:44:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kuberne
tes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I0327 23:47:17.808609    8488 request.go:1212] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"c44524ae-0474-450e-838e-68563d78f59b","resourceVersion":"392","creationTimestamp":"2024-03-27T23:44:30Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-03-27T23:44:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclas
s.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0327 23:47:17.808719    8488 round_trippers.go:463] PUT https://172.28.236.250:8441/apis/storage.k8s.io/v1/storageclasses/standard
	I0327 23:47:17.808719    8488 round_trippers.go:469] Request Headers:
	I0327 23:47:17.808772    8488 round_trippers.go:473]     Accept: application/json, */*
	I0327 23:47:17.808772    8488 round_trippers.go:473]     Content-Type: application/json
	I0327 23:47:17.808772    8488 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0327 23:47:17.813133    8488 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0327 23:47:17.813133    8488 round_trippers.go:577] Response Headers:
	I0327 23:47:17.813133    8488 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6da92fab-2b78-421e-a606-c6bce8c08812
	I0327 23:47:17.813463    8488 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7d975901-91e1-451b-9e7d-51fab94bcf42
	I0327 23:47:17.813463    8488 round_trippers.go:580]     Content-Length: 1220
	I0327 23:47:17.813463    8488 round_trippers.go:580]     Date: Wed, 27 Mar 2024 23:47:17 GMT
	I0327 23:47:17.813463    8488 round_trippers.go:580]     Audit-Id: 25850457-db59-4f8f-b2b4-8d04e10fe384
	I0327 23:47:17.813463    8488 round_trippers.go:580]     Cache-Control: no-cache, private
	I0327 23:47:17.813463    8488 round_trippers.go:580]     Content-Type: application/json
	I0327 23:47:17.813614    8488 request.go:1212] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"c44524ae-0474-450e-838e-68563d78f59b","resourceVersion":"392","creationTimestamp":"2024-03-27T23:44:30Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-03-27T23:44:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storagecla
ss.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0327 23:47:17.819543    8488 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0327 23:47:17.821726    8488 addons.go:505] duration metric: took 10.1144287s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0327 23:47:17.821726    8488 start.go:245] waiting for cluster config update ...
	I0327 23:47:17.821726    8488 start.go:254] writing updated cluster config ...
	I0327 23:47:17.837387    8488 ssh_runner.go:195] Run: rm -f paused
	I0327 23:47:17.991971    8488 start.go:600] kubectl: 1.29.3, cluster: 1.29.3 (minor skew: 0)
	I0327 23:47:17.999232    8488 out.go:177] * Done! kubectl is now configured to use "functional-848700" cluster and "default" namespace by default
	
	
	==> Docker <==
	Mar 27 23:46:53 functional-848700 dockerd[5675]: time="2024-03-27T23:46:53.955207921Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 27 23:46:53 functional-848700 dockerd[5675]: time="2024-03-27T23:46:53.955356752Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 27 23:46:54 functional-848700 dockerd[5675]: time="2024-03-27T23:46:54.009733486Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Mar 27 23:46:54 functional-848700 dockerd[5675]: time="2024-03-27T23:46:54.010290193Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Mar 27 23:46:54 functional-848700 dockerd[5675]: time="2024-03-27T23:46:54.010468227Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 27 23:46:54 functional-848700 dockerd[5675]: time="2024-03-27T23:46:54.010850800Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 27 23:46:54 functional-848700 dockerd[5675]: time="2024-03-27T23:46:54.035448319Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Mar 27 23:46:54 functional-848700 dockerd[5675]: time="2024-03-27T23:46:54.035877101Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Mar 27 23:46:54 functional-848700 dockerd[5675]: time="2024-03-27T23:46:54.036066537Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 27 23:46:54 functional-848700 dockerd[5675]: time="2024-03-27T23:46:54.036385198Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 27 23:46:54 functional-848700 cri-dockerd[5910]: time="2024-03-27T23:46:54Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/741be24a0c8eed0c8f185f73111ff2d496e4de2d59942dd2d2c9a578e9dd240b/resolv.conf as [nameserver 172.28.224.1]"
	Mar 27 23:46:54 functional-848700 cri-dockerd[5910]: time="2024-03-27T23:46:54Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/34ad14a528c313d131eea252a030cf7b4d7ed4355cdf15ec381a1a6759061c8e/resolv.conf as [nameserver 172.28.224.1]"
	Mar 27 23:46:54 functional-848700 dockerd[5675]: time="2024-03-27T23:46:54.410055574Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Mar 27 23:46:54 functional-848700 dockerd[5675]: time="2024-03-27T23:46:54.410214805Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Mar 27 23:46:54 functional-848700 dockerd[5675]: time="2024-03-27T23:46:54.410527165Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 27 23:46:54 functional-848700 dockerd[5675]: time="2024-03-27T23:46:54.411060567Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 27 23:46:54 functional-848700 cri-dockerd[5910]: time="2024-03-27T23:46:54Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/23055714a7e1427c5a4ff8491679c48d69b227c5cd977ea15375cda98e0a03da/resolv.conf as [nameserver 172.28.224.1]"
	Mar 27 23:46:54 functional-848700 dockerd[5675]: time="2024-03-27T23:46:54.709864146Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Mar 27 23:46:54 functional-848700 dockerd[5675]: time="2024-03-27T23:46:54.713551579Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Mar 27 23:46:54 functional-848700 dockerd[5675]: time="2024-03-27T23:46:54.713734215Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 27 23:46:54 functional-848700 dockerd[5675]: time="2024-03-27T23:46:54.714087986Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 27 23:46:55 functional-848700 dockerd[5675]: time="2024-03-27T23:46:55.024809912Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Mar 27 23:46:55 functional-848700 dockerd[5675]: time="2024-03-27T23:46:55.025384342Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Mar 27 23:46:55 functional-848700 dockerd[5675]: time="2024-03-27T23:46:55.027548454Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 27 23:46:55 functional-848700 dockerd[5675]: time="2024-03-27T23:46:55.028811820Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	6000e2985c29d       cbb01a7bd410d       2 minutes ago       Running             coredns                   1                   23055714a7e14       coredns-76f75df574-kl22d
	1e555f345ff6a       6e38f40d628db       2 minutes ago       Running             storage-provisioner       1                   34ad14a528c31       storage-provisioner
	39ae878f4598a       a1d263b5dc5b0       2 minutes ago       Running             kube-proxy                1                   741be24a0c8ee       kube-proxy-njwdc
	b1bb95b0c2efe       8c390d98f50c0       2 minutes ago       Running             kube-scheduler            1                   0ef325268bc97       kube-scheduler-functional-848700
	6cd3a996f81f6       6052a25da3f97       2 minutes ago       Running             kube-controller-manager   1                   3cec88576376f       kube-controller-manager-functional-848700
	506865e1abaf3       3861cfcd7c04c       2 minutes ago       Running             etcd                      1                   4a568b18c97c6       etcd-functional-848700
	2f6c25a326a26       39f995c9f1996       2 minutes ago       Running             kube-apiserver            1                   2a031b51b5e9b       kube-apiserver-functional-848700
	8446d864143cc       6e38f40d628db       4 minutes ago       Exited              storage-provisioner       0                   7207097dd1ab0       storage-provisioner
	f46b0aee95185       cbb01a7bd410d       4 minutes ago       Exited              coredns                   0                   9e8bea3bcc4ea       coredns-76f75df574-kl22d
	0e76b3ceb2855       a1d263b5dc5b0       4 minutes ago       Exited              kube-proxy                0                   9000ea8c0bbc5       kube-proxy-njwdc
	42ef5b0003c2a       6052a25da3f97       5 minutes ago       Exited              kube-controller-manager   0                   9ae65303a3e04       kube-controller-manager-functional-848700
	5f04c49c6fd31       3861cfcd7c04c       5 minutes ago       Exited              etcd                      0                   8aad4eaab9c50       etcd-functional-848700
	69f1635a58fd4       39f995c9f1996       5 minutes ago       Exited              kube-apiserver            0                   07f0c4c151730       kube-apiserver-functional-848700
	8500b7ce7c194       8c390d98f50c0       5 minutes ago       Exited              kube-scheduler            0                   bcd5d1eb7fb60       kube-scheduler-functional-848700
	
	
	==> coredns [6000e2985c29] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 61f4d0960164fdf8d8157aaa96d041acf5b29f3c98ba802d705114162ff9f2cc889bbb973f9b8023f3112734912ee6f4eadc4faa21115183d5697de30dae3805
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:54978 - 51072 "HINFO IN 7779448319506104792.195530824692068798. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.144083292s
	
	
	==> coredns [f46b0aee9518] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 61f4d0960164fdf8d8157aaa96d041acf5b29f3c98ba802d705114162ff9f2cc889bbb973f9b8023f3112734912ee6f4eadc4faa21115183d5697de30dae3805
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:35952 - 43523 "HINFO IN 4198905852182052840.389351811873958910. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.059846872s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[285066353]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (27-Mar-2024 23:44:22.122) (total time: 30001ms):
	Trace[285066353]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (23:44:52.123)
	Trace[285066353]: [30.00126386s] [30.00126386s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[301150791]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (27-Mar-2024 23:44:22.126) (total time: 30001ms):
	Trace[301150791]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (23:44:52.127)
	Trace[301150791]: [30.00132075s] [30.00132075s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1530573565]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (27-Mar-2024 23:44:22.123) (total time: 30004ms):
	Trace[1530573565]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30004ms (23:44:52.127)
	Trace[1530573565]: [30.004338382s] [30.004338382s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               functional-848700
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-848700
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1a940f980f77c9bfc8ef93678db8c1f49c9dd79d
	                    minikube.k8s.io/name=functional-848700
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_27T23_44_08_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 27 Mar 2024 23:44:03 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-848700
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 27 Mar 2024 23:49:06 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 27 Mar 2024 23:48:25 +0000   Wed, 27 Mar 2024 23:44:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 27 Mar 2024 23:48:25 +0000   Wed, 27 Mar 2024 23:44:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 27 Mar 2024 23:48:25 +0000   Wed, 27 Mar 2024 23:44:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 27 Mar 2024 23:48:25 +0000   Wed, 27 Mar 2024 23:44:13 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.28.236.250
	  Hostname:    functional-848700
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912872Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912872Ki
	  pods:               110
	System Info:
	  Machine ID:                 ce66d41641d44301970c38e720f2e055
	  System UUID:                5579dedf-5100-684c-a671-f77107c64448
	  Boot ID:                    7dcc9223-8d21-403c-a8a4-70dc4f9b7aaf
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.0
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-76f75df574-kl22d                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     4m50s
	  kube-system                 etcd-functional-848700                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         5m6s
	  kube-system                 kube-apiserver-functional-848700             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m3s
	  kube-system                 kube-controller-manager-functional-848700    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m2s
	  kube-system                 kube-proxy-njwdc                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m50s
	  kube-system                 kube-scheduler-functional-848700             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m1s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m42s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m48s                  kube-proxy       
	  Normal  Starting                 2m15s                  kube-proxy       
	  Normal  Starting                 5m2s                   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  5m2s                   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     5m1s                   kubelet          Node functional-848700 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    5m1s                   kubelet          Node functional-848700 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  5m1s                   kubelet          Node functional-848700 status is now: NodeHasSufficientMemory
	  Normal  NodeReady                4m57s                  kubelet          Node functional-848700 status is now: NodeReady
	  Normal  RegisteredNode           4m51s                  node-controller  Node functional-848700 event: Registered Node functional-848700 in Controller
	  Normal  Starting                 2m23s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m23s (x8 over 2m23s)  kubelet          Node functional-848700 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m23s (x8 over 2m23s)  kubelet          Node functional-848700 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m23s (x7 over 2m23s)  kubelet          Node functional-848700 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m23s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2m4s                   node-controller  Node functional-848700 event: Registered Node functional-848700 in Controller
	
	
	==> dmesg <==
	[  +0.135048] kauditd_printk_skb: 12 callbacks suppressed
	[  +5.482672] kauditd_printk_skb: 12 callbacks suppressed
	[  +0.792019] systemd-fstab-generator[1534]: Ignoring "noauto" option for root device
	[  +7.992738] systemd-fstab-generator[1818]: Ignoring "noauto" option for root device
	[  +0.125436] kauditd_printk_skb: 51 callbacks suppressed
	[Mar27 23:44] systemd-fstab-generator[2953]: Ignoring "noauto" option for root device
	[  +0.170809] kauditd_printk_skb: 62 callbacks suppressed
	[ +11.574356] systemd-fstab-generator[3743]: Ignoring "noauto" option for root device
	[  +0.240811] kauditd_printk_skb: 12 callbacks suppressed
	[  +8.532909] kauditd_printk_skb: 63 callbacks suppressed
	[Mar27 23:46] systemd-fstab-generator[5187]: Ignoring "noauto" option for root device
	[  +0.706041] systemd-fstab-generator[5223]: Ignoring "noauto" option for root device
	[  +0.289124] systemd-fstab-generator[5235]: Ignoring "noauto" option for root device
	[  +0.366994] systemd-fstab-generator[5249]: Ignoring "noauto" option for root device
	[  +5.377324] kauditd_printk_skb: 89 callbacks suppressed
	[  +8.064100] systemd-fstab-generator[5858]: Ignoring "noauto" option for root device
	[  +0.242608] systemd-fstab-generator[5870]: Ignoring "noauto" option for root device
	[  +0.238410] systemd-fstab-generator[5882]: Ignoring "noauto" option for root device
	[  +0.313294] systemd-fstab-generator[5898]: Ignoring "noauto" option for root device
	[  +1.018430] systemd-fstab-generator[6053]: Ignoring "noauto" option for root device
	[  +3.850189] systemd-fstab-generator[6169]: Ignoring "noauto" option for root device
	[  +0.131393] kauditd_printk_skb: 140 callbacks suppressed
	[  +7.030561] kauditd_printk_skb: 52 callbacks suppressed
	[Mar27 23:47] kauditd_printk_skb: 27 callbacks suppressed
	[  +1.330821] systemd-fstab-generator[7295]: Ignoring "noauto" option for root device
	
	
	==> etcd [506865e1abaf] <==
	{"level":"info","ts":"2024-03-27T23:46:49.548499Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-03-27T23:46:49.548811Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-03-27T23:46:49.544885Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"758adb59a7430cb6 switched to configuration voters=(8469823227328400566)"}
	{"level":"info","ts":"2024-03-27T23:46:49.550733Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"ce9533bf2a42081e","local-member-id":"758adb59a7430cb6","added-peer-id":"758adb59a7430cb6","added-peer-peer-urls":["https://172.28.236.250:2380"]}
	{"level":"info","ts":"2024-03-27T23:46:49.551522Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"ce9533bf2a42081e","local-member-id":"758adb59a7430cb6","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-27T23:46:49.553662Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-27T23:46:49.591961Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-03-27T23:46:49.59202Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"172.28.236.250:2380"}
	{"level":"info","ts":"2024-03-27T23:46:49.592993Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"172.28.236.250:2380"}
	{"level":"info","ts":"2024-03-27T23:46:49.601833Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-03-27T23:46:49.601775Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"758adb59a7430cb6","initial-advertise-peer-urls":["https://172.28.236.250:2380"],"listen-peer-urls":["https://172.28.236.250:2380"],"advertise-client-urls":["https://172.28.236.250:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.28.236.250:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-03-27T23:46:50.839617Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"758adb59a7430cb6 is starting a new election at term 2"}
	{"level":"info","ts":"2024-03-27T23:46:50.839913Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"758adb59a7430cb6 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-03-27T23:46:50.840222Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"758adb59a7430cb6 received MsgPreVoteResp from 758adb59a7430cb6 at term 2"}
	{"level":"info","ts":"2024-03-27T23:46:50.840423Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"758adb59a7430cb6 became candidate at term 3"}
	{"level":"info","ts":"2024-03-27T23:46:50.840591Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"758adb59a7430cb6 received MsgVoteResp from 758adb59a7430cb6 at term 3"}
	{"level":"info","ts":"2024-03-27T23:46:50.840774Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"758adb59a7430cb6 became leader at term 3"}
	{"level":"info","ts":"2024-03-27T23:46:50.840937Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 758adb59a7430cb6 elected leader 758adb59a7430cb6 at term 3"}
	{"level":"info","ts":"2024-03-27T23:46:50.856933Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"758adb59a7430cb6","local-member-attributes":"{Name:functional-848700 ClientURLs:[https://172.28.236.250:2379]}","request-path":"/0/members/758adb59a7430cb6/attributes","cluster-id":"ce9533bf2a42081e","publish-timeout":"7s"}
	{"level":"info","ts":"2024-03-27T23:46:50.857488Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-27T23:46:50.858116Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-27T23:46:50.867283Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-03-27T23:46:50.889625Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-03-27T23:46:50.889975Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-03-27T23:46:50.897312Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.28.236.250:2379"}
	
	
	==> etcd [5f04c49c6fd3] <==
	{"level":"info","ts":"2024-03-27T23:44:00.859133Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-27T23:44:00.868095Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"758adb59a7430cb6","local-member-attributes":"{Name:functional-848700 ClientURLs:[https://172.28.236.250:2379]}","request-path":"/0/members/758adb59a7430cb6/attributes","cluster-id":"ce9533bf2a42081e","publish-timeout":"7s"}
	{"level":"info","ts":"2024-03-27T23:44:00.86845Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-27T23:44:00.868632Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-27T23:44:00.874754Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-03-27T23:44:00.874797Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-03-27T23:44:00.884993Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-03-27T23:44:00.906457Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"ce9533bf2a42081e","local-member-id":"758adb59a7430cb6","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-27T23:44:00.909533Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-27T23:44:00.918925Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-27T23:44:00.91467Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.28.236.250:2379"}
	{"level":"warn","ts":"2024-03-27T23:44:07.373652Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"131.487096ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/functional-848700\" ","response":"range_response_count:1 size:3530"}
	{"level":"info","ts":"2024-03-27T23:44:07.374528Z","caller":"traceutil/trace.go:171","msg":"trace[490756289] range","detail":"{range_begin:/registry/minions/functional-848700; range_end:; response_count:1; response_revision:217; }","duration":"132.400006ms","start":"2024-03-27T23:44:07.242112Z","end":"2024-03-27T23:44:07.374512Z","steps":["trace[490756289] 'range keys from in-memory index tree'  (duration: 131.314656ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-27T23:44:08.027601Z","caller":"traceutil/trace.go:171","msg":"trace[187029006] transaction","detail":"{read_only:false; response_revision:218; number_of_response:1; }","duration":"643.517865ms","start":"2024-03-27T23:44:07.384064Z","end":"2024-03-27T23:44:08.027582Z","steps":["trace[187029006] 'process raft request'  (duration: 643.418344ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-27T23:44:08.028256Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-27T23:44:07.384043Z","time spent":"643.616087ms","remote":"127.0.0.1:58374","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":3744,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/minions/functional-848700\" mod_revision:216 > success:<request_put:<key:\"/registry/minions/functional-848700\" value_size:3701 >> failure:<request_range:<key:\"/registry/minions/functional-848700\" > >"}
	{"level":"info","ts":"2024-03-27T23:46:28.163455Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-03-27T23:46:28.163588Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"functional-848700","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://172.28.236.250:2380"],"advertise-client-urls":["https://172.28.236.250:2379"]}
	{"level":"warn","ts":"2024-03-27T23:46:28.163673Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-03-27T23:46:28.163842Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-03-27T23:46:28.258213Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 172.28.236.250:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-03-27T23:46:28.258258Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 172.28.236.250:2379: use of closed network connection"}
	{"level":"info","ts":"2024-03-27T23:46:28.258355Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"758adb59a7430cb6","current-leader-member-id":"758adb59a7430cb6"}
	{"level":"info","ts":"2024-03-27T23:46:28.266491Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"172.28.236.250:2380"}
	{"level":"info","ts":"2024-03-27T23:46:28.266646Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"172.28.236.250:2380"}
	{"level":"info","ts":"2024-03-27T23:46:28.266677Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"functional-848700","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://172.28.236.250:2380"],"advertise-client-urls":["https://172.28.236.250:2379"]}
	
	
	==> kernel <==
	 23:49:10 up 7 min,  0 users,  load average: 0.30, 0.47, 0.25
	Linux functional-848700 5.10.207 #1 SMP Wed Mar 27 22:02:20 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [2f6c25a326a2] <==
	I0327 23:46:53.024736       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0327 23:46:53.024747       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0327 23:46:53.024759       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0327 23:46:53.132025       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0327 23:46:53.142143       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0327 23:46:53.142180       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0327 23:46:53.142266       1 shared_informer.go:318] Caches are synced for configmaps
	I0327 23:46:53.142321       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0327 23:46:53.144801       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0327 23:46:53.145839       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0327 23:46:53.147490       1 aggregator.go:165] initial CRD sync complete...
	I0327 23:46:53.147674       1 autoregister_controller.go:141] Starting autoregister controller
	I0327 23:46:53.147682       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0327 23:46:53.147688       1 cache.go:39] Caches are synced for autoregister controller
	I0327 23:46:53.164849       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0327 23:46:53.186748       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	E0327 23:46:53.192988       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0327 23:46:54.039389       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0327 23:46:55.375175       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0327 23:46:55.412221       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0327 23:46:55.486660       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0327 23:46:55.542323       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0327 23:46:55.557857       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0327 23:47:06.448415       1 controller.go:624] quota admission added evaluator for: endpoints
	I0327 23:47:06.494631       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-apiserver [69f1635a58fd] <==
	W0327 23:46:37.218298       1 logging.go:59] [core] [Channel #106 SubChannel #107] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0327 23:46:37.222037       1 logging.go:59] [core] [Channel #79 SubChannel #80] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0327 23:46:37.288461       1 logging.go:59] [core] [Channel #70 SubChannel #71] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0327 23:46:37.300063       1 logging.go:59] [core] [Channel #163 SubChannel #164] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0327 23:46:37.302899       1 logging.go:59] [core] [Channel #130 SubChannel #131] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0327 23:46:37.308983       1 logging.go:59] [core] [Channel #139 SubChannel #140] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0327 23:46:37.361070       1 logging.go:59] [core] [Channel #25 SubChannel #26] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0327 23:46:37.402113       1 logging.go:59] [core] [Channel #97 SubChannel #98] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0327 23:46:37.404882       1 logging.go:59] [core] [Channel #19 SubChannel #20] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0327 23:46:37.420114       1 logging.go:59] [core] [Channel #85 SubChannel #86] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0327 23:46:37.425445       1 logging.go:59] [core] [Channel #103 SubChannel #104] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0327 23:46:37.454201       1 logging.go:59] [core] [Channel #115 SubChannel #116] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0327 23:46:37.515491       1 logging.go:59] [core] [Channel #172 SubChannel #173] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0327 23:46:37.521676       1 logging.go:59] [core] [Channel #148 SubChannel #149] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0327 23:46:37.591024       1 logging.go:59] [core] [Channel #118 SubChannel #119] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0327 23:46:37.634279       1 logging.go:59] [core] [Channel #28 SubChannel #29] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0327 23:46:37.716973       1 logging.go:59] [core] [Channel #94 SubChannel #95] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0327 23:46:37.759122       1 logging.go:59] [core] [Channel #61 SubChannel #62] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0327 23:46:37.762317       1 logging.go:59] [core] [Channel #58 SubChannel #59] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0327 23:46:37.823362       1 logging.go:59] [core] [Channel #100 SubChannel #101] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0327 23:46:37.875894       1 logging.go:59] [core] [Channel #73 SubChannel #74] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0327 23:46:38.028918       1 logging.go:59] [core] [Channel #109 SubChannel #110] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0327 23:46:38.052307       1 logging.go:59] [core] [Channel #88 SubChannel #89] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0327 23:46:38.097489       1 logging.go:59] [core] [Channel #112 SubChannel #113] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0327 23:46:38.171262       1 logging.go:59] [core] [Channel #15 SubChannel #16] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [42ef5b0003c2] <==
	I0327 23:44:19.744937       1 shared_informer.go:318] Caches are synced for cronjob
	I0327 23:44:19.757934       1 shared_informer.go:318] Caches are synced for resource quota
	I0327 23:44:19.774543       1 shared_informer.go:318] Caches are synced for job
	I0327 23:44:20.102864       1 shared_informer.go:318] Caches are synced for garbage collector
	I0327 23:44:20.102896       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I0327 23:44:20.194103       1 shared_informer.go:318] Caches are synced for garbage collector
	I0327 23:44:20.553307       1 event.go:376] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-76f75df574 to 2"
	I0327 23:44:20.642944       1 event.go:376] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-njwdc"
	I0327 23:44:20.726096       1 event.go:376] "Event occurred" object="kube-system/coredns-76f75df574" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-76f75df574-tfkwp"
	I0327 23:44:20.774638       1 event.go:376] "Event occurred" object="kube-system/coredns-76f75df574" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-76f75df574-kl22d"
	I0327 23:44:20.806396       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="254.081679ms"
	I0327 23:44:20.842862       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="36.332558ms"
	I0327 23:44:20.907880       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="64.936579ms"
	I0327 23:44:20.908904       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="779.874µs"
	I0327 23:44:20.918532       1 event.go:376] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-76f75df574 to 1 from 2"
	I0327 23:44:20.963507       1 event.go:376] "Event occurred" object="kube-system/coredns-76f75df574" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-76f75df574-tfkwp"
	I0327 23:44:21.020928       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="99.533541ms"
	I0327 23:44:21.102969       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="81.803459ms"
	I0327 23:44:21.103865       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="58.905µs"
	I0327 23:44:22.609052       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="70.806µs"
	I0327 23:44:22.625486       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="100.408µs"
	I0327 23:44:22.636296       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="60.505µs"
	I0327 23:44:22.655808       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="255.72µs"
	I0327 23:45:01.154570       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="21.305341ms"
	I0327 23:45:01.155228       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="281.311µs"
	
	
	==> kube-controller-manager [6cd3a996f81f] <==
	I0327 23:47:06.202793       1 shared_informer.go:318] Caches are synced for bootstrap_signer
	I0327 23:47:06.205732       1 shared_informer.go:318] Caches are synced for legacy-service-account-token-cleaner
	I0327 23:47:06.207613       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-serving
	I0327 23:47:06.210702       1 shared_informer.go:318] Caches are synced for ReplicaSet
	I0327 23:47:06.211055       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="90.904µs"
	I0327 23:47:06.214625       1 shared_informer.go:318] Caches are synced for crt configmap
	I0327 23:47:06.214994       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-client
	I0327 23:47:06.216996       1 shared_informer.go:318] Caches are synced for GC
	I0327 23:47:06.221362       1 shared_informer.go:318] Caches are synced for stateful set
	I0327 23:47:06.222862       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0327 23:47:06.226423       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-legacy-unknown
	I0327 23:47:06.230956       1 shared_informer.go:318] Caches are synced for taint
	I0327 23:47:06.231295       1 node_lifecycle_controller.go:1222] "Initializing eviction metric for zone" zone=""
	I0327 23:47:06.231693       1 node_lifecycle_controller.go:874] "Missing timestamp for Node. Assuming now as a timestamp" node="functional-848700"
	I0327 23:47:06.231928       1 node_lifecycle_controller.go:1068] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I0327 23:47:06.232063       1 event.go:376] "Event occurred" object="functional-848700" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node functional-848700 event: Registered Node functional-848700 in Controller"
	I0327 23:47:06.242171       1 shared_informer.go:318] Caches are synced for attach detach
	I0327 23:47:06.290955       1 shared_informer.go:318] Caches are synced for endpoint
	I0327 23:47:06.297623       1 shared_informer.go:318] Caches are synced for resource quota
	I0327 23:47:06.300283       1 shared_informer.go:318] Caches are synced for endpoint_slice
	I0327 23:47:06.344471       1 shared_informer.go:318] Caches are synced for endpoint_slice_mirroring
	I0327 23:47:06.391928       1 shared_informer.go:318] Caches are synced for resource quota
	I0327 23:47:06.744427       1 shared_informer.go:318] Caches are synced for garbage collector
	I0327 23:47:06.789821       1 shared_informer.go:318] Caches are synced for garbage collector
	I0327 23:47:06.789863       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	
	
	==> kube-proxy [0e76b3ceb285] <==
	I0327 23:44:21.967122       1 server_others.go:72] "Using iptables proxy"
	I0327 23:44:22.013533       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["172.28.236.250"]
	I0327 23:44:22.104645       1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
	I0327 23:44:22.105002       1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0327 23:44:22.105198       1 server_others.go:168] "Using iptables Proxier"
	I0327 23:44:22.110785       1 proxier.go:245] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0327 23:44:22.111213       1 server.go:865] "Version info" version="v1.29.3"
	I0327 23:44:22.111820       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0327 23:44:22.113620       1 config.go:188] "Starting service config controller"
	I0327 23:44:22.113917       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0327 23:44:22.114162       1 config.go:97] "Starting endpoint slice config controller"
	I0327 23:44:22.114465       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0327 23:44:22.118917       1 config.go:315] "Starting node config controller"
	I0327 23:44:22.119168       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0327 23:44:22.214834       1 shared_informer.go:318] Caches are synced for service config
	I0327 23:44:22.214889       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0327 23:44:22.219608       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-proxy [39ae878f4598] <==
	I0327 23:46:54.772048       1 server_others.go:72] "Using iptables proxy"
	I0327 23:46:54.817461       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["172.28.236.250"]
	I0327 23:46:54.881233       1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
	I0327 23:46:54.881303       1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0327 23:46:54.881328       1 server_others.go:168] "Using iptables Proxier"
	I0327 23:46:54.888103       1 proxier.go:245] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0327 23:46:54.888868       1 server.go:865] "Version info" version="v1.29.3"
	I0327 23:46:54.889106       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0327 23:46:54.891897       1 config.go:315] "Starting node config controller"
	I0327 23:46:54.892812       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0327 23:46:54.892469       1 config.go:188] "Starting service config controller"
	I0327 23:46:54.892905       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0327 23:46:54.892481       1 config.go:97] "Starting endpoint slice config controller"
	I0327 23:46:54.892920       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0327 23:46:54.993001       1 shared_informer.go:318] Caches are synced for node config
	I0327 23:46:54.993065       1 shared_informer.go:318] Caches are synced for service config
	I0327 23:46:54.993161       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [8500b7ce7c19] <==
	W0327 23:44:04.459013       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0327 23:44:04.459473       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0327 23:44:04.492423       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0327 23:44:04.492675       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0327 23:44:04.550010       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0327 23:44:04.550324       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0327 23:44:04.550673       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0327 23:44:04.550799       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0327 23:44:04.595915       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0327 23:44:04.596384       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0327 23:44:04.749368       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0327 23:44:04.749839       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0327 23:44:04.756028       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0327 23:44:04.756149       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0327 23:44:04.890160       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0327 23:44:04.890303       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0327 23:44:04.919837       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0327 23:44:04.919890       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0327 23:44:04.940494       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0327 23:44:04.940607       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0327 23:44:07.245367       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0327 23:46:28.100439       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0327 23:46:28.100522       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0327 23:46:28.100940       1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0327 23:46:28.101140       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [b1bb95b0c2ef] <==
	I0327 23:46:51.343003       1 serving.go:380] Generated self-signed cert in-memory
	W0327 23:46:53.036138       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0327 23:46:53.036418       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0327 23:46:53.036870       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0327 23:46:53.037075       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0327 23:46:53.116193       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.29.3"
	I0327 23:46:53.116647       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0327 23:46:53.130201       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0327 23:46:53.130335       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0327 23:46:53.130353       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0327 23:46:53.130372       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0327 23:46:53.231307       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Mar 27 23:46:53 functional-848700 kubelet[6176]: I0327 23:46:53.184862    6176 kubelet_node_status.go:76] "Successfully registered node" node="functional-848700"
	Mar 27 23:46:53 functional-848700 kubelet[6176]: I0327 23:46:53.190185    6176 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Mar 27 23:46:53 functional-848700 kubelet[6176]: I0327 23:46:53.191869    6176 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Mar 27 23:46:53 functional-848700 kubelet[6176]: I0327 23:46:53.276234    6176 apiserver.go:52] "Watching apiserver"
	Mar 27 23:46:53 functional-848700 kubelet[6176]: I0327 23:46:53.280857    6176 topology_manager.go:215] "Topology Admit Handler" podUID="862af240-aef4-4288-818c-2a9a96564cba" podNamespace="kube-system" podName="kube-proxy-njwdc"
	Mar 27 23:46:53 functional-848700 kubelet[6176]: I0327 23:46:53.280991    6176 topology_manager.go:215] "Topology Admit Handler" podUID="68395922-8215-40eb-ba25-a66d3a484a61" podNamespace="kube-system" podName="coredns-76f75df574-kl22d"
	Mar 27 23:46:53 functional-848700 kubelet[6176]: I0327 23:46:53.283497    6176 topology_manager.go:215] "Topology Admit Handler" podUID="b22b2f6c-1e15-4539-9cec-25649ec63e34" podNamespace="kube-system" podName="storage-provisioner"
	Mar 27 23:46:53 functional-848700 kubelet[6176]: I0327 23:46:53.308063    6176 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
	Mar 27 23:46:53 functional-848700 kubelet[6176]: I0327 23:46:53.360647    6176 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/862af240-aef4-4288-818c-2a9a96564cba-lib-modules\") pod \"kube-proxy-njwdc\" (UID: \"862af240-aef4-4288-818c-2a9a96564cba\") " pod="kube-system/kube-proxy-njwdc"
	Mar 27 23:46:53 functional-848700 kubelet[6176]: I0327 23:46:53.360738    6176 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/862af240-aef4-4288-818c-2a9a96564cba-xtables-lock\") pod \"kube-proxy-njwdc\" (UID: \"862af240-aef4-4288-818c-2a9a96564cba\") " pod="kube-system/kube-proxy-njwdc"
	Mar 27 23:46:53 functional-848700 kubelet[6176]: I0327 23:46:53.360765    6176 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/b22b2f6c-1e15-4539-9cec-25649ec63e34-tmp\") pod \"storage-provisioner\" (UID: \"b22b2f6c-1e15-4539-9cec-25649ec63e34\") " pod="kube-system/storage-provisioner"
	Mar 27 23:46:54 functional-848700 kubelet[6176]: I0327 23:46:54.304300    6176 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="34ad14a528c313d131eea252a030cf7b4d7ed4355cdf15ec381a1a6759061c8e"
	Mar 27 23:46:54 functional-848700 kubelet[6176]: I0327 23:46:54.494208    6176 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="23055714a7e1427c5a4ff8491679c48d69b227c5cd977ea15375cda98e0a03da"
	Mar 27 23:46:56 functional-848700 kubelet[6176]: I0327 23:46:56.796773    6176 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Mar 27 23:46:59 functional-848700 kubelet[6176]: I0327 23:46:59.159071    6176 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Mar 27 23:47:47 functional-848700 kubelet[6176]: E0327 23:47:47.396908    6176 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 27 23:47:47 functional-848700 kubelet[6176]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 27 23:47:47 functional-848700 kubelet[6176]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 27 23:47:47 functional-848700 kubelet[6176]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 27 23:47:47 functional-848700 kubelet[6176]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 27 23:48:47 functional-848700 kubelet[6176]: E0327 23:48:47.394000    6176 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 27 23:48:47 functional-848700 kubelet[6176]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 27 23:48:47 functional-848700 kubelet[6176]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 27 23:48:47 functional-848700 kubelet[6176]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 27 23:48:47 functional-848700 kubelet[6176]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	
	
	==> storage-provisioner [1e555f345ff6] <==
	I0327 23:46:54.854988       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0327 23:46:54.869598       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0327 23:46:54.869638       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0327 23:47:12.310246       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0327 23:47:12.310418       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"ebf5322c-7691-47f6-b240-aadccb96923d", APIVersion:"v1", ResourceVersion:"580", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-848700_1e80ad31-32dc-4fee-adba-9b7ee97fc815 became leader
	I0327 23:47:12.311257       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-848700_1e80ad31-32dc-4fee-adba-9b7ee97fc815!
	I0327 23:47:12.411833       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-848700_1e80ad31-32dc-4fee-adba-9b7ee97fc815!
	
	
	==> storage-provisioner [8446d864143c] <==
	I0327 23:44:29.221176       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0327 23:44:29.237198       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0327 23:44:29.237503       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0327 23:44:29.258066       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0327 23:44:29.259061       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-848700_e5f66fad-eefe-4ccc-a520-2dcc64e0e15c!
	I0327 23:44:29.261255       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"ebf5322c-7691-47f6-b240-aadccb96923d", APIVersion:"v1", ResourceVersion:"387", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-848700_e5f66fad-eefe-4ccc-a520-2dcc64e0e15c became leader
	I0327 23:44:29.360227       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-848700_e5f66fad-eefe-4ccc-a520-2dcc64e0e15c!
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0327 23:49:02.256942    6860 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-848700 -n functional-848700
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-848700 -n functional-848700: (12.9557744s)
helpers_test.go:261: (dbg) Run:  kubectl --context functional-848700 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestFunctional/serial/MinikubeKubectlCmdDirectly FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/serial/MinikubeKubectlCmdDirectly (36.55s)

TestFunctional/parallel/ConfigCmd (1.82s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-848700 config unset cpus
functional_test.go:1206: expected config error for "out/minikube-windows-amd64.exe -p functional-848700 config unset cpus" to be -""- but got *"W0327 23:52:21.076890   13128 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube6\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified."*
functional_test.go:1195: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-848700 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-848700 config get cpus: exit status 14 (276.2943ms)

** stderr ** 
	W0327 23:52:21.420651    5736 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1206: expected config error for "out/minikube-windows-amd64.exe -p functional-848700 config get cpus" to be -"Error: specified key could not be found in config"- but got *"W0327 23:52:21.420651    5736 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube6\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\nError: specified key could not be found in config"*
functional_test.go:1195: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-848700 config set cpus 2
functional_test.go:1206: expected config error for "out/minikube-windows-amd64.exe -p functional-848700 config set cpus 2" to be -"! These changes will take effect upon a minikube delete and then a minikube start"- but got *"W0327 23:52:21.700555    9328 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube6\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\n! These changes will take effect upon a minikube delete and then a minikube start"*
functional_test.go:1195: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-848700 config get cpus
functional_test.go:1206: expected config error for "out/minikube-windows-amd64.exe -p functional-848700 config get cpus" to be -""- but got *"W0327 23:52:22.014913    9360 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube6\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified."*
functional_test.go:1195: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-848700 config unset cpus
functional_test.go:1206: expected config error for "out/minikube-windows-amd64.exe -p functional-848700 config unset cpus" to be -""- but got *"W0327 23:52:22.304884   12724 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube6\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified."*
functional_test.go:1195: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-848700 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-848700 config get cpus: exit status 14 (278.4088ms)

** stderr ** 
	W0327 23:52:22.610931    4760 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1206: expected config error for "out/minikube-windows-amd64.exe -p functional-848700 config get cpus" to be -"Error: specified key could not be found in config"- but got *"W0327 23:52:22.610931    4760 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube6\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\nError: specified key could not be found in config"*
--- FAIL: TestFunctional/parallel/ConfigCmd (1.82s)
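Every config-command mismatch above is caused by the same stray stderr line: the Docker CLI cannot resolve its current context "default" because the context metadata file is missing on this Jenkins agent. The hex directory in the reported path is not random; the Docker CLI's context store keys each context's metadata directory by the SHA-256 of the context name, and the digest of "default" is exactly the directory the warning names. A minimal Python check of that path convention (the base directory is taken from the log; the layout follows the Docker CLI context-store convention):

```python
import hashlib
from pathlib import PureWindowsPath

# The Docker CLI stores context metadata under
# <docker config dir>/contexts/meta/<sha256(context name)>/meta.json
digest = hashlib.sha256(b"default").hexdigest()
print(digest)
# -> 37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f
#    (the directory named in the warning above)

# Reconstruct the path the warning says it cannot open.
meta = PureWindowsPath(r"C:\Users\jenkins.minikube6\.docker",
                       "contexts", "meta", digest, "meta.json")
print(meta)
```

Because the test asserts on exact stderr contents, this warning alone is enough to fail `TestFunctional/parallel/ConfigCmd` even though each `config` subcommand otherwise behaved as expected.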

TestFunctional/parallel/ServiceCmd/HTTPS (15.04s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-848700 service --namespace=default --https --url hello-node
functional_test.go:1505: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-848700 service --namespace=default --https --url hello-node: exit status 1 (15.0350539s)

** stderr ** 
	W0327 23:54:16.827594   13548 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
functional_test.go:1507: failed to get service url. args "out/minikube-windows-amd64.exe -p functional-848700 service --namespace=default --https --url hello-node" : exit status 1
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (15.04s)

TestFunctional/parallel/ServiceCmd/Format (15.05s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-848700 service hello-node --url --format={{.IP}}
functional_test.go:1536: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-848700 service hello-node --url --format={{.IP}}: exit status 1 (15.0486927s)

** stderr ** 
	W0327 23:54:31.879718    9076 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
functional_test.go:1538: failed to get service url with custom format. args "out/minikube-windows-amd64.exe -p functional-848700 service hello-node --url --format={{.IP}}": exit status 1
functional_test.go:1544: "" is not a valid IP
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (15.05s)

TestFunctional/parallel/ServiceCmd/URL (15.03s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-848700 service hello-node --url
functional_test.go:1555: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-848700 service hello-node --url: exit status 1 (15.0255509s)

** stderr ** 
	W0327 23:54:46.949907    3128 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
functional_test.go:1557: failed to get service url. args: "out/minikube-windows-amd64.exe -p functional-848700 service hello-node --url": exit status 1
functional_test.go:1561: found endpoint for hello-node: 
functional_test.go:1569: expected scheme to be -"http"- got scheme: *""*
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (15.03s)

TestMultiControlPlane/serial/PingHostFromPods (73.1s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-170000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-170000 -- exec busybox-7fdf7869d9-jw6s4 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-170000 -- exec busybox-7fdf7869d9-jw6s4 -- sh -c "ping -c 1 172.28.224.1"
ha_test.go:218: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p ha-170000 -- exec busybox-7fdf7869d9-jw6s4 -- sh -c "ping -c 1 172.28.224.1": exit status 1 (10.6021159s)

-- stdout --
	PING 172.28.224.1 (172.28.224.1): 56 data bytes
	
	--- 172.28.224.1 ping statistics ---
	1 packets transmitted, 0 packets received, 100% packet loss

-- /stdout --
** stderr ** 
	W0328 00:16:09.723137    6512 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	command terminated with exit code 1

** /stderr **
ha_test.go:219: Failed to ping host (172.28.224.1) from pod (busybox-7fdf7869d9-jw6s4): exit status 1
ha_test.go:207: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-170000 -- exec busybox-7fdf7869d9-lb47v -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-170000 -- exec busybox-7fdf7869d9-lb47v -- sh -c "ping -c 1 172.28.224.1"
ha_test.go:218: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p ha-170000 -- exec busybox-7fdf7869d9-lb47v -- sh -c "ping -c 1 172.28.224.1": exit status 1 (10.5402048s)

-- stdout --
	PING 172.28.224.1 (172.28.224.1): 56 data bytes
	
	--- 172.28.224.1 ping statistics ---
	1 packets transmitted, 0 packets received, 100% packet loss

-- /stdout --
** stderr ** 
	W0328 00:16:20.869753   14148 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	command terminated with exit code 1

** /stderr **
ha_test.go:219: Failed to ping host (172.28.224.1) from pod (busybox-7fdf7869d9-lb47v): exit status 1
ha_test.go:207: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-170000 -- exec busybox-7fdf7869d9-shnp5 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-170000 -- exec busybox-7fdf7869d9-shnp5 -- sh -c "ping -c 1 172.28.224.1"
ha_test.go:218: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p ha-170000 -- exec busybox-7fdf7869d9-shnp5 -- sh -c "ping -c 1 172.28.224.1": exit status 1 (10.5681885s)

-- stdout --
	PING 172.28.224.1 (172.28.224.1): 56 data bytes
	
	--- 172.28.224.1 ping statistics ---
	1 packets transmitted, 0 packets received, 100% packet loss

-- /stdout --
** stderr ** 
	W0328 00:16:32.006309   12760 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	command terminated with exit code 1

** /stderr **
ha_test.go:219: Failed to ping host (172.28.224.1) from pod (busybox-7fdf7869d9-shnp5): exit status 1
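All three busybox pods show the identical signature: a single ICMP echo transmitted to the host gateway 172.28.224.1 and none received, i.e. 100% loss rather than a DNS or exec error. A small sketch (a hypothetical helper, not part of the test harness) that extracts that signature from the captured ping stdout:

```python
import re

# Captured stdout from one of the failed "ping -c 1 172.28.224.1" runs above.
ping_stdout = """\
PING 172.28.224.1 (172.28.224.1): 56 data bytes

--- 172.28.224.1 ping statistics ---
1 packets transmitted, 0 packets received, 100% packet loss"""

# Parse the summary line emitted by busybox ping.
m = re.search(r"(\d+) packets transmitted, (\d+) packets received, (\d+)% packet loss",
              ping_stdout)
sent, received, loss = map(int, m.groups())
print(f"{loss}% loss ({received}/{sent} received)")  # -> 100% loss (0/1 received)
```

Total loss to the host-side gateway from inside every pod, while in-cluster DNS lookups succeed, suggests the ICMP replies are being dropped on the Windows host side of the Hyper-V switch rather than a pod networking fault; that attribution is an inference from the log, not something the test output states.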
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p ha-170000 -n ha-170000
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p ha-170000 -n ha-170000: (13.265195s)
helpers_test.go:244: <<< TestMultiControlPlane/serial/PingHostFromPods FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/PingHostFromPods]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-170000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p ha-170000 logs -n 25: (9.7508519s)
helpers_test.go:252: TestMultiControlPlane/serial/PingHostFromPods logs: 
-- stdout --
	
	==> Audit <==
	|----------------|--------------------------------------|-------------------|-------------------|----------------|---------------------|---------------------|
	|    Command     |                 Args                 |      Profile      |       User        |    Version     |     Start Time      |      End Time       |
	|----------------|--------------------------------------|-------------------|-------------------|----------------|---------------------|---------------------|
	| update-context | functional-848700                    | functional-848700 | minikube6\jenkins | v1.33.0-beta.0 | 27 Mar 24 23:56 UTC | 27 Mar 24 23:56 UTC |
	|                | update-context                       |                   |                   |                |                     |                     |
	|                | --alsologtostderr -v=2               |                   |                   |                |                     |                     |
	| update-context | functional-848700                    | functional-848700 | minikube6\jenkins | v1.33.0-beta.0 | 27 Mar 24 23:56 UTC | 27 Mar 24 23:56 UTC |
	|                | update-context                       |                   |                   |                |                     |                     |
	|                | --alsologtostderr -v=2               |                   |                   |                |                     |                     |
	| image          | functional-848700 image ls           | functional-848700 | minikube6\jenkins | v1.33.0-beta.0 | 27 Mar 24 23:56 UTC | 27 Mar 24 23:56 UTC |
	| delete         | -p functional-848700                 | functional-848700 | minikube6\jenkins | v1.33.0-beta.0 | 28 Mar 24 00:00 UTC | 28 Mar 24 00:01 UTC |
	| start          | -p ha-170000 --wait=true             | ha-170000         | minikube6\jenkins | v1.33.0-beta.0 | 28 Mar 24 00:01 UTC | 28 Mar 24 00:15 UTC |
	|                | --memory=2200 --ha                   |                   |                   |                |                     |                     |
	|                | -v=7 --alsologtostderr               |                   |                   |                |                     |                     |
	|                | --driver=hyperv                      |                   |                   |                |                     |                     |
	| kubectl        | -p ha-170000 -- apply -f             | ha-170000         | minikube6\jenkins | v1.33.0-beta.0 | 28 Mar 24 00:15 UTC | 28 Mar 24 00:15 UTC |
	|                | ./testdata/ha/ha-pod-dns-test.yaml   |                   |                   |                |                     |                     |
	| kubectl        | -p ha-170000 -- rollout status       | ha-170000         | minikube6\jenkins | v1.33.0-beta.0 | 28 Mar 24 00:15 UTC | 28 Mar 24 00:16 UTC |
	|                | deployment/busybox                   |                   |                   |                |                     |                     |
	| kubectl        | -p ha-170000 -- get pods -o          | ha-170000         | minikube6\jenkins | v1.33.0-beta.0 | 28 Mar 24 00:16 UTC | 28 Mar 24 00:16 UTC |
	|                | jsonpath='{.items[*].status.podIP}'  |                   |                   |                |                     |                     |
	| kubectl        | -p ha-170000 -- get pods -o          | ha-170000         | minikube6\jenkins | v1.33.0-beta.0 | 28 Mar 24 00:16 UTC | 28 Mar 24 00:16 UTC |
	|                | jsonpath='{.items[*].metadata.name}' |                   |                   |                |                     |                     |
	| kubectl        | -p ha-170000 -- exec                 | ha-170000         | minikube6\jenkins | v1.33.0-beta.0 | 28 Mar 24 00:16 UTC | 28 Mar 24 00:16 UTC |
	|                | busybox-7fdf7869d9-jw6s4 --          |                   |                   |                |                     |                     |
	|                | nslookup kubernetes.io               |                   |                   |                |                     |                     |
	| kubectl        | -p ha-170000 -- exec                 | ha-170000         | minikube6\jenkins | v1.33.0-beta.0 | 28 Mar 24 00:16 UTC | 28 Mar 24 00:16 UTC |
	|                | busybox-7fdf7869d9-lb47v --          |                   |                   |                |                     |                     |
	|                | nslookup kubernetes.io               |                   |                   |                |                     |                     |
	| kubectl        | -p ha-170000 -- exec                 | ha-170000         | minikube6\jenkins | v1.33.0-beta.0 | 28 Mar 24 00:16 UTC | 28 Mar 24 00:16 UTC |
	|                | busybox-7fdf7869d9-shnp5 --          |                   |                   |                |                     |                     |
	|                | nslookup kubernetes.io               |                   |                   |                |                     |                     |
	| kubectl        | -p ha-170000 -- exec                 | ha-170000         | minikube6\jenkins | v1.33.0-beta.0 | 28 Mar 24 00:16 UTC | 28 Mar 24 00:16 UTC |
	|                | busybox-7fdf7869d9-jw6s4 --          |                   |                   |                |                     |                     |
	|                | nslookup kubernetes.default          |                   |                   |                |                     |                     |
	| kubectl        | -p ha-170000 -- exec                 | ha-170000         | minikube6\jenkins | v1.33.0-beta.0 | 28 Mar 24 00:16 UTC | 28 Mar 24 00:16 UTC |
	|                | busybox-7fdf7869d9-lb47v --          |                   |                   |                |                     |                     |
	|                | nslookup kubernetes.default          |                   |                   |                |                     |                     |
	| kubectl        | -p ha-170000 -- exec                 | ha-170000         | minikube6\jenkins | v1.33.0-beta.0 | 28 Mar 24 00:16 UTC | 28 Mar 24 00:16 UTC |
	|                | busybox-7fdf7869d9-shnp5 --          |                   |                   |                |                     |                     |
	|                | nslookup kubernetes.default          |                   |                   |                |                     |                     |
	| kubectl        | -p ha-170000 -- exec                 | ha-170000         | minikube6\jenkins | v1.33.0-beta.0 | 28 Mar 24 00:16 UTC | 28 Mar 24 00:16 UTC |
	|                | busybox-7fdf7869d9-jw6s4 -- nslookup |                   |                   |                |                     |                     |
	|                | kubernetes.default.svc.cluster.local |                   |                   |                |                     |                     |
	| kubectl        | -p ha-170000 -- exec                 | ha-170000         | minikube6\jenkins | v1.33.0-beta.0 | 28 Mar 24 00:16 UTC | 28 Mar 24 00:16 UTC |
	|                | busybox-7fdf7869d9-lb47v -- nslookup |                   |                   |                |                     |                     |
	|                | kubernetes.default.svc.cluster.local |                   |                   |                |                     |                     |
	| kubectl        | -p ha-170000 -- exec                 | ha-170000         | minikube6\jenkins | v1.33.0-beta.0 | 28 Mar 24 00:16 UTC | 28 Mar 24 00:16 UTC |
	|                | busybox-7fdf7869d9-shnp5 -- nslookup |                   |                   |                |                     |                     |
	|                | kubernetes.default.svc.cluster.local |                   |                   |                |                     |                     |
	| kubectl        | -p ha-170000 -- get pods -o          | ha-170000         | minikube6\jenkins | v1.33.0-beta.0 | 28 Mar 24 00:16 UTC | 28 Mar 24 00:16 UTC |
	|                | jsonpath='{.items[*].metadata.name}' |                   |                   |                |                     |                     |
	| kubectl        | -p ha-170000 -- exec                 | ha-170000         | minikube6\jenkins | v1.33.0-beta.0 | 28 Mar 24 00:16 UTC | 28 Mar 24 00:16 UTC |
	|                | busybox-7fdf7869d9-jw6s4             |                   |                   |                |                     |                     |
	|                | -- sh -c nslookup                    |                   |                   |                |                     |                     |
	|                | host.minikube.internal | awk         |                   |                   |                |                     |                     |
	|                | 'NR==5' | cut -d' ' -f3              |                   |                   |                |                     |                     |
	| kubectl        | -p ha-170000 -- exec                 | ha-170000         | minikube6\jenkins | v1.33.0-beta.0 | 28 Mar 24 00:16 UTC |                     |
	|                | busybox-7fdf7869d9-jw6s4 -- sh       |                   |                   |                |                     |                     |
	|                | -c ping -c 1 172.28.224.1            |                   |                   |                |                     |                     |
	| kubectl        | -p ha-170000 -- exec                 | ha-170000         | minikube6\jenkins | v1.33.0-beta.0 | 28 Mar 24 00:16 UTC | 28 Mar 24 00:16 UTC |
	|                | busybox-7fdf7869d9-lb47v             |                   |                   |                |                     |                     |
	|                | -- sh -c nslookup                    |                   |                   |                |                     |                     |
	|                | host.minikube.internal | awk         |                   |                   |                |                     |                     |
	|                | 'NR==5' | cut -d' ' -f3              |                   |                   |                |                     |                     |
	| kubectl        | -p ha-170000 -- exec                 | ha-170000         | minikube6\jenkins | v1.33.0-beta.0 | 28 Mar 24 00:16 UTC |                     |
	|                | busybox-7fdf7869d9-lb47v -- sh       |                   |                   |                |                     |                     |
	|                | -c ping -c 1 172.28.224.1            |                   |                   |                |                     |                     |
	| kubectl        | -p ha-170000 -- exec                 | ha-170000         | minikube6\jenkins | v1.33.0-beta.0 | 28 Mar 24 00:16 UTC | 28 Mar 24 00:16 UTC |
	|                | busybox-7fdf7869d9-shnp5             |                   |                   |                |                     |                     |
	|                | -- sh -c nslookup                    |                   |                   |                |                     |                     |
	|                | host.minikube.internal | awk         |                   |                   |                |                     |                     |
	|                | 'NR==5' | cut -d' ' -f3              |                   |                   |                |                     |                     |
	| kubectl        | -p ha-170000 -- exec                 | ha-170000         | minikube6\jenkins | v1.33.0-beta.0 | 28 Mar 24 00:16 UTC |                     |
	|                | busybox-7fdf7869d9-shnp5 -- sh       |                   |                   |                |                     |                     |
	|                | -c ping -c 1 172.28.224.1            |                   |                   |                |                     |                     |
	|----------------|--------------------------------------|-------------------|-------------------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/28 00:01:24
	Running on machine: minikube6
	Binary: Built with gc go1.22.1 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0328 00:01:24.451554   13512 out.go:291] Setting OutFile to fd 796 ...
	I0328 00:01:24.451554   13512 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 00:01:24.451554   13512 out.go:304] Setting ErrFile to fd 920...
	I0328 00:01:24.451554   13512 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 00:01:24.476415   13512 out.go:298] Setting JSON to false
	I0328 00:01:24.479993   13512 start.go:129] hostinfo: {"hostname":"minikube6","uptime":6745,"bootTime":1711577338,"procs":188,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4170 Build 19045.4170","kernelVersion":"10.0.19045.4170 Build 19045.4170","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0328 00:01:24.479993   13512 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0328 00:01:24.486130   13512 out.go:177] * [ha-170000] minikube v1.33.0-beta.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4170 Build 19045.4170
	I0328 00:01:24.490137   13512 notify.go:220] Checking for updates...
	I0328 00:01:24.492684   13512 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0328 00:01:24.494987   13512 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0328 00:01:24.498052   13512 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0328 00:01:24.500847   13512 out.go:177]   - MINIKUBE_LOCATION=18485
	I0328 00:01:24.503280   13512 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0328 00:01:24.507157   13512 driver.go:392] Setting default libvirt URI to qemu:///system
	I0328 00:01:30.168764   13512 out.go:177] * Using the hyperv driver based on user configuration
	I0328 00:01:30.172808   13512 start.go:297] selected driver: hyperv
	I0328 00:01:30.172808   13512 start.go:901] validating driver "hyperv" against <nil>
	I0328 00:01:30.172808   13512 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0328 00:01:30.230162   13512 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0328 00:01:30.231286   13512 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0328 00:01:30.231286   13512 cni.go:84] Creating CNI manager for ""
	I0328 00:01:30.231286   13512 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0328 00:01:30.231286   13512 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0328 00:01:30.232013   13512 start.go:340] cluster config:
	{Name:ha-170000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:ha-170000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0328 00:01:30.232013   13512 iso.go:125] acquiring lock: {Name:mk879943e10653d47fd8ae811a43a2f6cff06f02 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0328 00:01:30.237048   13512 out.go:177] * Starting "ha-170000" primary control-plane node in "ha-170000" cluster
	I0328 00:01:30.239101   13512 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0328 00:01:30.239566   13512 preload.go:147] Found local preload: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4
	I0328 00:01:30.239566   13512 cache.go:56] Caching tarball of preloaded images
	I0328 00:01:30.239720   13512 preload.go:173] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0328 00:01:30.240052   13512 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0328 00:01:30.240286   13512 profile.go:142] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-170000\config.json ...
	I0328 00:01:30.240286   13512 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-170000\config.json: {Name:mk71d93613833e4ee8cfd8afcb08bb23d0afb004 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 00:01:30.241604   13512 start.go:360] acquireMachinesLock for ha-170000: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0328 00:01:30.241604   13512 start.go:364] duration metric: took 0s to acquireMachinesLock for "ha-170000"
	I0328 00:01:30.242320   13512 start.go:93] Provisioning new machine with config: &{Name:ha-170000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:ha-170000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0328 00:01:30.242320   13512 start.go:125] createHost starting for "" (driver="hyperv")
	I0328 00:01:30.245886   13512 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0328 00:01:30.246388   13512 start.go:159] libmachine.API.Create for "ha-170000" (driver="hyperv")
	I0328 00:01:30.246388   13512 client.go:168] LocalClient.Create starting
	I0328 00:01:30.246774   13512 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem
	I0328 00:01:30.246774   13512 main.go:141] libmachine: Decoding PEM data...
	I0328 00:01:30.246774   13512 main.go:141] libmachine: Parsing certificate...
	I0328 00:01:30.247487   13512 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem
	I0328 00:01:30.247487   13512 main.go:141] libmachine: Decoding PEM data...
	I0328 00:01:30.247893   13512 main.go:141] libmachine: Parsing certificate...
	I0328 00:01:30.248034   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0328 00:01:32.464702   13512 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0328 00:01:32.465750   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:01:32.465750   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0328 00:01:34.313106   13512 main.go:141] libmachine: [stdout =====>] : False
	
	I0328 00:01:34.313106   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:01:34.313106   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0328 00:01:35.908115   13512 main.go:141] libmachine: [stdout =====>] : True
	
	I0328 00:01:35.908115   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:01:35.908115   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0328 00:01:39.812126   13512 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0328 00:01:39.812126   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:01:39.814601   13512 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube6/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.0-1711559712-18485-amd64.iso...
	I0328 00:01:40.325866   13512 main.go:141] libmachine: Creating SSH key...
	I0328 00:01:40.463413   13512 main.go:141] libmachine: Creating VM...
	I0328 00:01:40.463413   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0328 00:01:43.536968   13512 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0328 00:01:43.536968   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:01:43.537235   13512 main.go:141] libmachine: Using switch "Default Switch"
	I0328 00:01:43.537337   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0328 00:01:45.429619   13512 main.go:141] libmachine: [stdout =====>] : True
	
	I0328 00:01:45.429878   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:01:45.429878   13512 main.go:141] libmachine: Creating VHD
	I0328 00:01:45.429878   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-170000\fixed.vhd' -SizeBytes 10MB -Fixed
	I0328 00:01:49.400603   13512 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube6
	Path                    : C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-170000\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 588AB536-AE95-4F4C-9215-F82B93ECAE3A
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0328 00:01:49.400603   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:01:49.400723   13512 main.go:141] libmachine: Writing magic tar header
	I0328 00:01:49.400723   13512 main.go:141] libmachine: Writing SSH key tar header
	I0328 00:01:49.410089   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-170000\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-170000\disk.vhd' -VHDType Dynamic -DeleteSource
	I0328 00:01:52.737577   13512 main.go:141] libmachine: [stdout =====>] : 
	I0328 00:01:52.737577   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:01:52.737841   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-170000\disk.vhd' -SizeBytes 20000MB
	I0328 00:01:55.375105   13512 main.go:141] libmachine: [stdout =====>] : 
	I0328 00:01:55.375976   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:01:55.376062   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-170000 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-170000' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0328 00:01:59.276195   13512 main.go:141] libmachine: [stdout =====>] : 
	Name      State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----      ----- ----------- ----------------- ------   ------             -------
	ha-170000 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0328 00:01:59.276981   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:01:59.276981   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-170000 -DynamicMemoryEnabled $false
	I0328 00:02:01.686956   13512 main.go:141] libmachine: [stdout =====>] : 
	I0328 00:02:01.686956   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:02:01.687149   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-170000 -Count 2
	I0328 00:02:03.998021   13512 main.go:141] libmachine: [stdout =====>] : 
	I0328 00:02:03.998507   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:02:03.998638   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-170000 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-170000\boot2docker.iso'
	I0328 00:02:06.809493   13512 main.go:141] libmachine: [stdout =====>] : 
	I0328 00:02:06.809493   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:02:06.809493   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-170000 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-170000\disk.vhd'
	I0328 00:02:09.631283   13512 main.go:141] libmachine: [stdout =====>] : 
	I0328 00:02:09.631283   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:02:09.631413   13512 main.go:141] libmachine: Starting VM...
	I0328 00:02:09.631413   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-170000
	I0328 00:02:12.856754   13512 main.go:141] libmachine: [stdout =====>] : 
	I0328 00:02:12.856754   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:02:12.856980   13512 main.go:141] libmachine: Waiting for host to start...
	I0328 00:02:12.857162   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-170000 ).state
	I0328 00:02:15.221077   13512 main.go:141] libmachine: [stdout =====>] : Running
	
	I0328 00:02:15.221923   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:02:15.221986   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-170000 ).networkadapters[0]).ipaddresses[0]
	I0328 00:02:17.911929   13512 main.go:141] libmachine: [stdout =====>] : 
	I0328 00:02:17.912102   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:02:18.926016   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-170000 ).state
	I0328 00:02:21.244218   13512 main.go:141] libmachine: [stdout =====>] : Running
	
	I0328 00:02:21.244471   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:02:21.244557   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-170000 ).networkadapters[0]).ipaddresses[0]
	I0328 00:02:23.870634   13512 main.go:141] libmachine: [stdout =====>] : 
	I0328 00:02:23.870634   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:02:24.885016   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-170000 ).state
	I0328 00:02:27.173814   13512 main.go:141] libmachine: [stdout =====>] : Running
	
	I0328 00:02:27.173814   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:02:27.173814   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-170000 ).networkadapters[0]).ipaddresses[0]
	I0328 00:02:29.801559   13512 main.go:141] libmachine: [stdout =====>] : 
	I0328 00:02:29.801559   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:02:30.809119   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-170000 ).state
	I0328 00:02:33.097395   13512 main.go:141] libmachine: [stdout =====>] : Running
	
	I0328 00:02:33.097395   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:02:33.097680   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-170000 ).networkadapters[0]).ipaddresses[0]
	I0328 00:02:35.725485   13512 main.go:141] libmachine: [stdout =====>] : 
	I0328 00:02:35.725485   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:02:36.737437   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-170000 ).state
	I0328 00:02:39.064704   13512 main.go:141] libmachine: [stdout =====>] : Running
	
	I0328 00:02:39.064704   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:02:39.065011   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-170000 ).networkadapters[0]).ipaddresses[0]
	I0328 00:02:41.754244   13512 main.go:141] libmachine: [stdout =====>] : 172.28.239.31
	
	I0328 00:02:41.754244   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:02:41.755224   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-170000 ).state
	I0328 00:02:44.002408   13512 main.go:141] libmachine: [stdout =====>] : Running
	
	I0328 00:02:44.002408   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:02:44.002408   13512 machine.go:94] provisionDockerMachine start ...
	I0328 00:02:44.002606   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-170000 ).state
	I0328 00:02:46.285687   13512 main.go:141] libmachine: [stdout =====>] : Running
	
	I0328 00:02:46.286015   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:02:46.286306   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-170000 ).networkadapters[0]).ipaddresses[0]
	I0328 00:02:49.037569   13512 main.go:141] libmachine: [stdout =====>] : 172.28.239.31
	
	I0328 00:02:49.038358   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:02:49.044326   13512 main.go:141] libmachine: Using SSH client type: native
	I0328 00:02:49.057418   13512 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x12a9f80] 0x12acb60 <nil>  [] 0s} 172.28.239.31 22 <nil> <nil>}
	I0328 00:02:49.057418   13512 main.go:141] libmachine: About to run SSH command:
	hostname
	I0328 00:02:49.200615   13512 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0328 00:02:49.200615   13512 buildroot.go:166] provisioning hostname "ha-170000"
	I0328 00:02:49.200615   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-170000 ).state
	I0328 00:02:51.508278   13512 main.go:141] libmachine: [stdout =====>] : Running
	
	I0328 00:02:51.508946   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:02:51.508946   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-170000 ).networkadapters[0]).ipaddresses[0]
	I0328 00:02:54.197196   13512 main.go:141] libmachine: [stdout =====>] : 172.28.239.31
	
	I0328 00:02:54.197196   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:02:54.203066   13512 main.go:141] libmachine: Using SSH client type: native
	I0328 00:02:54.203830   13512 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x12a9f80] 0x12acb60 <nil>  [] 0s} 172.28.239.31 22 <nil> <nil>}
	I0328 00:02:54.203830   13512 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-170000 && echo "ha-170000" | sudo tee /etc/hostname
	I0328 00:02:54.378371   13512 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-170000
	
	I0328 00:02:54.378574   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-170000 ).state
	I0328 00:02:56.643928   13512 main.go:141] libmachine: [stdout =====>] : Running
	
	I0328 00:02:56.643928   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:02:56.644651   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-170000 ).networkadapters[0]).ipaddresses[0]
	I0328 00:02:59.331865   13512 main.go:141] libmachine: [stdout =====>] : 172.28.239.31
	
	I0328 00:02:59.331865   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:02:59.338512   13512 main.go:141] libmachine: Using SSH client type: native
	I0328 00:02:59.338712   13512 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x12a9f80] 0x12acb60 <nil>  [] 0s} 172.28.239.31 22 <nil> <nil>}
	I0328 00:02:59.338712   13512 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-170000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-170000/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-170000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0328 00:02:59.488578   13512 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0328 00:02:59.488711   13512 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0328 00:02:59.488711   13512 buildroot.go:174] setting up certificates
	I0328 00:02:59.488711   13512 provision.go:84] configureAuth start
	I0328 00:02:59.488711   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-170000 ).state
	I0328 00:03:01.767341   13512 main.go:141] libmachine: [stdout =====>] : Running
	
	I0328 00:03:01.767341   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:03:01.768381   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-170000 ).networkadapters[0]).ipaddresses[0]
	I0328 00:03:04.450579   13512 main.go:141] libmachine: [stdout =====>] : 172.28.239.31
	
	I0328 00:03:04.450579   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:03:04.451559   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-170000 ).state
	I0328 00:03:06.727559   13512 main.go:141] libmachine: [stdout =====>] : Running
	
	I0328 00:03:06.728083   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:03:06.728196   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-170000 ).networkadapters[0]).ipaddresses[0]
	I0328 00:03:09.520843   13512 main.go:141] libmachine: [stdout =====>] : 172.28.239.31
	
	I0328 00:03:09.521424   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:03:09.521424   13512 provision.go:143] copyHostCerts
	I0328 00:03:09.521659   13512 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0328 00:03:09.522138   13512 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0328 00:03:09.522138   13512 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0328 00:03:09.522429   13512 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0328 00:03:09.523970   13512 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0328 00:03:09.524237   13512 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0328 00:03:09.524315   13512 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0328 00:03:09.524655   13512 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0328 00:03:09.525708   13512 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0328 00:03:09.526067   13512 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0328 00:03:09.526145   13512 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0328 00:03:09.526458   13512 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1675 bytes)
	I0328 00:03:09.527180   13512 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-170000 san=[127.0.0.1 172.28.239.31 ha-170000 localhost minikube]
	I0328 00:03:09.786947   13512 provision.go:177] copyRemoteCerts
	I0328 00:03:09.798987   13512 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0328 00:03:09.798987   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-170000 ).state
	I0328 00:03:12.056732   13512 main.go:141] libmachine: [stdout =====>] : Running
	
	I0328 00:03:12.056732   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:03:12.057308   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-170000 ).networkadapters[0]).ipaddresses[0]
	I0328 00:03:14.770344   13512 main.go:141] libmachine: [stdout =====>] : 172.28.239.31
	
	I0328 00:03:14.771453   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:03:14.771453   13512 sshutil.go:53] new ssh client: &{IP:172.28.239.31 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-170000\id_rsa Username:docker}
	I0328 00:03:14.877447   13512 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (5.0784291s)
	I0328 00:03:14.877447   13512 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0328 00:03:14.877447   13512 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0328 00:03:14.928727   13512 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0328 00:03:14.928849   13512 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1196 bytes)
	I0328 00:03:14.980419   13512 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0328 00:03:14.980419   13512 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0328 00:03:15.046548   13512 provision.go:87] duration metric: took 15.5577428s to configureAuth
	I0328 00:03:15.046548   13512 buildroot.go:189] setting minikube options for container-runtime
	I0328 00:03:15.048004   13512 config.go:182] Loaded profile config "ha-170000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0328 00:03:15.048004   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-170000 ).state
	I0328 00:03:17.303779   13512 main.go:141] libmachine: [stdout =====>] : Running
	
	I0328 00:03:17.303779   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:03:17.303779   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-170000 ).networkadapters[0]).ipaddresses[0]
	I0328 00:03:19.992844   13512 main.go:141] libmachine: [stdout =====>] : 172.28.239.31
	
	I0328 00:03:19.992844   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:03:19.998086   13512 main.go:141] libmachine: Using SSH client type: native
	I0328 00:03:19.999012   13512 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x12a9f80] 0x12acb60 <nil>  [] 0s} 172.28.239.31 22 <nil> <nil>}
	I0328 00:03:19.999012   13512 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0328 00:03:20.140497   13512 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0328 00:03:20.140497   13512 buildroot.go:70] root file system type: tmpfs
	I0328 00:03:20.140767   13512 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0328 00:03:20.140767   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-170000 ).state
	I0328 00:03:22.376318   13512 main.go:141] libmachine: [stdout =====>] : Running
	
	I0328 00:03:22.376318   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:03:22.376553   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-170000 ).networkadapters[0]).ipaddresses[0]
	I0328 00:03:25.103964   13512 main.go:141] libmachine: [stdout =====>] : 172.28.239.31
	
	I0328 00:03:25.103964   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:03:25.109663   13512 main.go:141] libmachine: Using SSH client type: native
	I0328 00:03:25.110103   13512 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x12a9f80] 0x12acb60 <nil>  [] 0s} 172.28.239.31 22 <nil> <nil>}
	I0328 00:03:25.110303   13512 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0328 00:03:25.286594   13512 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0328 00:03:25.286752   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-170000 ).state
	I0328 00:03:27.574098   13512 main.go:141] libmachine: [stdout =====>] : Running
	
	I0328 00:03:27.574565   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:03:27.574565   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-170000 ).networkadapters[0]).ipaddresses[0]
	I0328 00:03:30.306899   13512 main.go:141] libmachine: [stdout =====>] : 172.28.239.31
	
	I0328 00:03:30.306899   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:03:30.313549   13512 main.go:141] libmachine: Using SSH client type: native
	I0328 00:03:30.313549   13512 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x12a9f80] 0x12acb60 <nil>  [] 0s} 172.28.239.31 22 <nil> <nil>}
	I0328 00:03:30.314145   13512 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0328 00:03:32.569540   13512 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0328 00:03:32.569540   13512 machine.go:97] duration metric: took 48.5668388s to provisionDockerMachine
	I0328 00:03:32.569540   13512 client.go:171] duration metric: took 2m2.322416s to LocalClient.Create
	I0328 00:03:32.569540   13512 start.go:167] duration metric: took 2m2.322416s to libmachine.API.Create "ha-170000"
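The docker.service update above uses a write-if-changed idiom: the candidate unit is written to `docker.service.new`, and it is swapped in (followed by `daemon-reload`/`enable`/`restart`) only when `diff` reports it differs from, or cannot find, the current unit. A minimal local sketch of the same pattern, using scratch files rather than real systemd units:

```shell
#!/bin/sh
# Write-if-changed: install "candidate" over "current" and run a reload
# hook only when the content differs (mirrors the
# `diff -u old new || { mv ...; systemctl ... }` sequence in the log).
set -eu

workdir=$(mktemp -d)
current="$workdir/docker.service"
candidate="$workdir/docker.service.new"

printf '[Unit]\nDescription=old\n' > "$current"
printf '[Unit]\nDescription=new\n' > "$candidate"

# diff exits non-zero when the files differ (or the target is missing),
# so the block after || runs only in that case.
diff -u "$current" "$candidate" >/dev/null 2>&1 || {
    mv "$candidate" "$current"
    echo "unit changed: would run systemctl daemon-reload && systemctl restart docker"
}

cat "$current"
```

In the log the `diff` fails because `/lib/systemd/system/docker.service` does not exist yet, so the move-and-restart branch always runs on first provisioning.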
	I0328 00:03:32.570374   13512 start.go:293] postStartSetup for "ha-170000" (driver="hyperv")
	I0328 00:03:32.570374   13512 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0328 00:03:32.583941   13512 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0328 00:03:32.583941   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-170000 ).state
	I0328 00:03:34.838964   13512 main.go:141] libmachine: [stdout =====>] : Running
	
	I0328 00:03:34.838964   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:03:34.840284   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-170000 ).networkadapters[0]).ipaddresses[0]
	I0328 00:03:37.533804   13512 main.go:141] libmachine: [stdout =====>] : 172.28.239.31
	
	I0328 00:03:37.533804   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:03:37.534390   13512 sshutil.go:53] new ssh client: &{IP:172.28.239.31 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-170000\id_rsa Username:docker}
	I0328 00:03:37.653466   13512 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (5.0694944s)
	I0328 00:03:37.666488   13512 ssh_runner.go:195] Run: cat /etc/os-release
	I0328 00:03:37.674674   13512 info.go:137] Remote host: Buildroot 2023.02.9
	I0328 00:03:37.674756   13512 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0328 00:03:37.674955   13512 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0328 00:03:37.676516   13512 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\104602.pem -> 104602.pem in /etc/ssl/certs
	I0328 00:03:37.676516   13512 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\104602.pem -> /etc/ssl/certs/104602.pem
	I0328 00:03:37.688915   13512 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0328 00:03:37.710497   13512 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\104602.pem --> /etc/ssl/certs/104602.pem (1708 bytes)
	I0328 00:03:37.765469   13512 start.go:296] duration metric: took 5.1950633s for postStartSetup
	I0328 00:03:37.768050   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-170000 ).state
	I0328 00:03:39.988163   13512 main.go:141] libmachine: [stdout =====>] : Running
	
	I0328 00:03:39.988163   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:03:39.989240   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-170000 ).networkadapters[0]).ipaddresses[0]
	I0328 00:03:42.727615   13512 main.go:141] libmachine: [stdout =====>] : 172.28.239.31
	
	I0328 00:03:42.727615   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:03:42.727615   13512 profile.go:142] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-170000\config.json ...
	I0328 00:03:42.730928   13512 start.go:128] duration metric: took 2m12.4878102s to createHost
	I0328 00:03:42.731092   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-170000 ).state
	I0328 00:03:44.945675   13512 main.go:141] libmachine: [stdout =====>] : Running
	
	I0328 00:03:44.945675   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:03:44.945750   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-170000 ).networkadapters[0]).ipaddresses[0]
	I0328 00:03:47.666349   13512 main.go:141] libmachine: [stdout =====>] : 172.28.239.31
	
	I0328 00:03:47.666499   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:03:47.672532   13512 main.go:141] libmachine: Using SSH client type: native
	I0328 00:03:47.673046   13512 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x12a9f80] 0x12acb60 <nil>  [] 0s} 172.28.239.31 22 <nil> <nil>}
	I0328 00:03:47.673046   13512 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0328 00:03:47.804700   13512 main.go:141] libmachine: SSH cmd err, output: <nil>: 1711584227.820364817
	
	I0328 00:03:47.804792   13512 fix.go:216] guest clock: 1711584227.820364817
	I0328 00:03:47.804792   13512 fix.go:229] Guest: 2024-03-28 00:03:47.820364817 +0000 UTC Remote: 2024-03-28 00:03:42.7310925 +0000 UTC m=+138.490354701 (delta=5.089272317s)
	I0328 00:03:47.804928   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-170000 ).state
	I0328 00:03:50.113155   13512 main.go:141] libmachine: [stdout =====>] : Running
	
	I0328 00:03:50.113189   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:03:50.113265   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-170000 ).networkadapters[0]).ipaddresses[0]
	I0328 00:03:52.838643   13512 main.go:141] libmachine: [stdout =====>] : 172.28.239.31
	
	I0328 00:03:52.838853   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:03:52.846732   13512 main.go:141] libmachine: Using SSH client type: native
	I0328 00:03:52.847284   13512 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x12a9f80] 0x12acb60 <nil>  [] 0s} 172.28.239.31 22 <nil> <nil>}
	I0328 00:03:52.847444   13512 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1711584227
	I0328 00:03:52.997164   13512 main.go:141] libmachine: SSH cmd err, output: <nil>: Thu Mar 28 00:03:47 UTC 2024
	
	I0328 00:03:52.997164   13512 fix.go:236] clock set: Thu Mar 28 00:03:47 UTC 2024
	 (err=<nil>)
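The clock-fix step above reads `date +%s.%N` on the guest, computes the guest/host delta (5.089s here), and resets the guest clock with `sudo date -s @<epoch>`. A rough sketch of the delta decision; the epoch values are taken from the log, but the 2-second threshold and variable names are illustrative, not minikube's actual logic:

```shell
#!/bin/sh
# Decide whether a guest-clock resync (date -s @<epoch>) is needed,
# based on the absolute guest/host drift in whole seconds.
set -eu

guest_epoch=1711584227   # parsed from `date +%s.%N` on the guest (see log)
host_epoch=1711584222    # host-side reference time, fixed for this example

delta=$((guest_epoch - host_epoch))
if [ "$delta" -lt 0 ]; then
    delta=$((-delta))
fi

if [ "$delta" -gt 2 ]; then
    echo "drift ${delta}s exceeds threshold: would run: sudo date -s @$guest_epoch"
else
    echo "drift ${delta}s within threshold: no resync"
fi
```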
	I0328 00:03:52.997164   13512 start.go:83] releasing machines lock for "ha-170000", held for 2m22.7546993s
	I0328 00:03:52.997800   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-170000 ).state
	I0328 00:03:55.242776   13512 main.go:141] libmachine: [stdout =====>] : Running
	
	I0328 00:03:55.242776   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:03:55.242776   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-170000 ).networkadapters[0]).ipaddresses[0]
	I0328 00:03:57.965723   13512 main.go:141] libmachine: [stdout =====>] : 172.28.239.31
	
	I0328 00:03:57.965723   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:03:57.970792   13512 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0328 00:03:57.970953   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-170000 ).state
	I0328 00:03:57.984844   13512 ssh_runner.go:195] Run: cat /version.json
	I0328 00:03:57.985837   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-170000 ).state
	I0328 00:04:00.307907   13512 main.go:141] libmachine: [stdout =====>] : Running
	
	I0328 00:04:00.307907   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:04:00.307907   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-170000 ).networkadapters[0]).ipaddresses[0]
	I0328 00:04:00.308410   13512 main.go:141] libmachine: [stdout =====>] : Running
	
	I0328 00:04:00.308410   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:04:00.308410   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-170000 ).networkadapters[0]).ipaddresses[0]
	I0328 00:04:03.031101   13512 main.go:141] libmachine: [stdout =====>] : 172.28.239.31
	
	I0328 00:04:03.031166   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:04:03.031166   13512 sshutil.go:53] new ssh client: &{IP:172.28.239.31 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-170000\id_rsa Username:docker}
	I0328 00:04:03.053196   13512 main.go:141] libmachine: [stdout =====>] : 172.28.239.31
	
	I0328 00:04:03.054255   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:04:03.054313   13512 sshutil.go:53] new ssh client: &{IP:172.28.239.31 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-170000\id_rsa Username:docker}
	I0328 00:04:03.136308   13512 ssh_runner.go:235] Completed: cat /version.json: (5.1504398s)
	I0328 00:04:03.149467   13512 ssh_runner.go:195] Run: systemctl --version
	I0328 00:04:03.289274   13512 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.3181589s)
	I0328 00:04:03.301433   13512 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0328 00:04:03.311571   13512 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0328 00:04:03.325076   13512 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0328 00:04:03.356978   13512 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0328 00:04:03.357064   13512 start.go:494] detecting cgroup driver to use...
	I0328 00:04:03.357224   13512 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0328 00:04:03.407602   13512 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0328 00:04:03.441947   13512 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0328 00:04:03.463952   13512 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0328 00:04:03.477193   13512 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0328 00:04:03.513455   13512 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0328 00:04:03.546805   13512 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0328 00:04:03.583159   13512 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0328 00:04:03.619690   13512 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0328 00:04:03.653485   13512 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0328 00:04:03.691356   13512 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0328 00:04:03.727252   13512 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0328 00:04:03.760867   13512 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0328 00:04:03.792080   13512 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0328 00:04:03.829094   13512 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0328 00:04:04.045659   13512 ssh_runner.go:195] Run: sudo systemctl restart containerd
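The containerd reconfiguration above is a series of in-place `sed` edits on `/etc/containerd/config.toml`; the cgroup-driver switch, for example, rewrites the `SystemdCgroup` key. A local sketch of that one edit on a scratch config (GNU `sed -i -r` as in the log; BSD/macOS sed would need `-i ''` and `-E`):

```shell
#!/bin/sh
# In-place sed edit of a containerd-style TOML key, as in the
# "configuring containerd to use cgroupfs as cgroup driver" step.
set -eu

cfg=$(mktemp)
cat > "$cfg" <<'EOF'
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true
EOF

# Same expression as the logged command, pointed at the scratch file.
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$cfg"
cat "$cfg"
```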
	I0328 00:04:04.081034   13512 start.go:494] detecting cgroup driver to use...
	I0328 00:04:04.094704   13512 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0328 00:04:04.133499   13512 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0328 00:04:04.173852   13512 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0328 00:04:04.232198   13512 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0328 00:04:04.274923   13512 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0328 00:04:04.313688   13512 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0328 00:04:04.380248   13512 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0328 00:04:04.405439   13512 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0328 00:04:04.453220   13512 ssh_runner.go:195] Run: which cri-dockerd
	I0328 00:04:04.480017   13512 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0328 00:04:04.501749   13512 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0328 00:04:04.551064   13512 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0328 00:04:04.788650   13512 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0328 00:04:05.003448   13512 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0328 00:04:05.003640   13512 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0328 00:04:05.055223   13512 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0328 00:04:05.278483   13512 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0328 00:04:07.844702   13512 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.566203s)
	I0328 00:04:07.858671   13512 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0328 00:04:07.899386   13512 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0328 00:04:07.936243   13512 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0328 00:04:08.154217   13512 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0328 00:04:08.389805   13512 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0328 00:04:08.603241   13512 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0328 00:04:08.648899   13512 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0328 00:04:08.687517   13512 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0328 00:04:08.926529   13512 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0328 00:04:09.041005   13512 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0328 00:04:09.055348   13512 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0328 00:04:09.065504   13512 start.go:562] Will wait 60s for crictl version
	I0328 00:04:09.081826   13512 ssh_runner.go:195] Run: which crictl
	I0328 00:04:09.108354   13512 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0328 00:04:09.198224   13512 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.0
	RuntimeApiVersion:  v1
	I0328 00:04:09.211099   13512 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0328 00:04:09.261001   13512 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0328 00:04:09.311043   13512 out.go:204] * Preparing Kubernetes v1.29.3 on Docker 26.0.0 ...
	I0328 00:04:09.311222   13512 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0328 00:04:09.316263   13512 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0328 00:04:09.316388   13512 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0328 00:04:09.316435   13512 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0328 00:04:09.316435   13512 ip.go:207] Found interface: {Index:6 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:26:7a:39 Flags:up|broadcast|multicast|running}
	I0328 00:04:09.320275   13512 ip.go:210] interface addr: fe80::e3e0:8483:9c84:940f/64
	I0328 00:04:09.320275   13512 ip.go:210] interface addr: 172.28.224.1/20
	I0328 00:04:09.334106   13512 ssh_runner.go:195] Run: grep 172.28.224.1	host.minikube.internal$ /etc/hosts
	I0328 00:04:09.343485   13512 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.28.224.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
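The hosts-file update above is an idempotent replace: any existing `host.minikube.internal` line is filtered out with `grep -v`, the fresh mapping is appended, and the temp file is copied over `/etc/hosts`, so repeated runs never accumulate duplicates. A local sketch of the same pattern on a scratch file (the grep pattern is simplified to the hostname suffix; the logged command anchors on a literal tab):

```shell
#!/bin/sh
# Idempotent hosts-entry update: strip any prior mapping for the name,
# append the current one, then replace the file wholesale.
set -eu

hosts=$(mktemp)
printf '127.0.0.1\tlocalhost\n10.0.0.9\thost.minikube.internal\n' > "$hosts"

tmp=$(mktemp)
{ grep -v 'host.minikube.internal$' "$hosts"; \
  printf '172.28.224.1\thost.minikube.internal\n'; } > "$tmp"
cp "$tmp" "$hosts"

cat "$hosts"
```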
	I0328 00:04:09.385359   13512 kubeadm.go:877] updating cluster {Name:ha-170000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3
ClusterName:ha-170000 Namespace:default APIServerHAVIP:172.28.239.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.28.239.31 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOption
s:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0328 00:04:09.385359   13512 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0328 00:04:09.396679   13512 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0328 00:04:09.425081   13512 docker.go:685] Got preloaded images: 
	I0328 00:04:09.425196   13512 docker.go:691] registry.k8s.io/kube-apiserver:v1.29.3 wasn't preloaded
	I0328 00:04:09.439940   13512 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0328 00:04:09.480958   13512 ssh_runner.go:195] Run: which lz4
	I0328 00:04:09.493141   13512 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0328 00:04:09.507814   13512 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0328 00:04:09.513931   13512 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0328 00:04:09.513931   13512 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (367996162 bytes)
	I0328 00:04:11.607629   13512 docker.go:649] duration metric: took 2.1141571s to copy over tarball
	I0328 00:04:11.624012   13512 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0328 00:04:20.604850   13512 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (8.9807831s)
	I0328 00:04:20.604850   13512 ssh_runner.go:146] rm: /preloaded.tar.lz4
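The preload sequence above is a skip-if-present transfer: a `stat` existence check runs first, and the ~350 MB tarball is scp'd only when that check fails (here it did, so the copy and extraction proceeded). A minimal sketch of the gating pattern; `copy_tarball` is a stand-in for the scp step, not a real minikube helper:

```shell
#!/bin/sh
# Skip-if-present transfer: copy the artifact only when the destination
# existence check fails, so a re-run performs no second copy.
set -eu

dest=$(mktemp -d)/preloaded.tar.lz4
copies=0

copy_tarball() {        # stand-in for `scp ... --> /preloaded.tar.lz4`
    copies=$((copies + 1))
    : > "$dest"
}

for _ in 1 2; do        # two passes: first copies, second is a no-op
    if ! stat "$dest" >/dev/null 2>&1; then
        copy_tarball
    fi
done

echo "copies performed: $copies"
```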
	I0328 00:04:20.682270   13512 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0328 00:04:20.707204   13512 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2630 bytes)
	I0328 00:04:20.756423   13512 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0328 00:04:20.992312   13512 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0328 00:04:23.872083   13512 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.879686s)
	I0328 00:04:23.882512   13512 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0328 00:04:23.908252   13512 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.29.3
	registry.k8s.io/kube-scheduler:v1.29.3
	registry.k8s.io/kube-controller-manager:v1.29.3
	registry.k8s.io/kube-proxy:v1.29.3
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0328 00:04:23.908252   13512 cache_images.go:84] Images are preloaded, skipping loading
	I0328 00:04:23.908252   13512 kubeadm.go:928] updating node { 172.28.239.31 8443 v1.29.3 docker true true} ...
	I0328 00:04:23.908793   13512 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-170000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.28.239.31
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:ha-170000 Namespace:default APIServerHAVIP:172.28.239.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0328 00:04:23.919286   13512 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0328 00:04:23.961434   13512 cni.go:84] Creating CNI manager for ""
	I0328 00:04:23.961434   13512 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0328 00:04:23.961434   13512 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0328 00:04:23.961434   13512 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.28.239.31 APIServerPort:8443 KubernetesVersion:v1.29.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-170000 NodeName:ha-170000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.28.239.31"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.28.239.31 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes
/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0328 00:04:23.962178   13512 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.28.239.31
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-170000"
	  kubeletExtraArgs:
	    node-ip: 172.28.239.31
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.28.239.31"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0328 00:04:23.962294   13512 kube-vip.go:111] generating kube-vip config ...
	I0328 00:04:23.975889   13512 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0328 00:04:24.004430   13512 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0328 00:04:24.004751   13512 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.28.239.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
	I0328 00:04:24.017981   13512 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0328 00:04:24.036154   13512 binaries.go:44] Found k8s binaries, skipping transfer
	I0328 00:04:24.048165   13512 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0328 00:04:24.068973   13512 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0328 00:04:24.104129   13512 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0328 00:04:24.139639   13512 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2154 bytes)
	I0328 00:04:24.177582   13512 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1352 bytes)
	I0328 00:04:24.227236   13512 ssh_runner.go:195] Run: grep 172.28.239.254	control-plane.minikube.internal$ /etc/hosts
	I0328 00:04:24.235219   13512 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.28.239.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
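	The /etc/hosts command above uses a filter-then-append pattern to stay idempotent: any existing `control-plane.minikube.internal` line is stripped before the fresh entry is written, so repeated starts never accumulate duplicates. A minimal sketch of the same pattern against a scratch file (the 172.28.239.254 address comes from the log; the temp-file names are illustrative):

```shell
# Work on a scratch copy instead of the real /etc/hosts.
hosts=$(mktemp)
tab=$(printf '\t')
printf '127.0.0.1\tlocalhost\n172.28.200.1\tcontrol-plane.minikube.internal\n' > "$hosts"

# Drop any stale control-plane entry, then append the current one;
# running this block twice still leaves exactly one matching line.
{ grep -v "${tab}control-plane.minikube.internal\$" "$hosts"
  printf '172.28.239.254\tcontrol-plane.minikube.internal\n'; } > "$hosts.new"
mv "$hosts.new" "$hosts"
```

The subshell-free `{ ...; } > file` grouping mirrors the logged command: both outputs are redirected together, then the result is copied back over the original in one step.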
	I0328 00:04:24.273917   13512 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0328 00:04:24.506990   13512 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0328 00:04:24.540006   13512 certs.go:68] Setting up C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-170000 for IP: 172.28.239.31
	I0328 00:04:24.540067   13512 certs.go:194] generating shared ca certs ...
	I0328 00:04:24.540067   13512 certs.go:226] acquiring lock for ca certs: {Name:mkc71405905d3cea24da832e98113e061e759324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 00:04:24.540349   13512 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key
	I0328 00:04:24.540349   13512 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key
	I0328 00:04:24.540349   13512 certs.go:256] generating profile certs ...
	I0328 00:04:24.541974   13512 certs.go:363] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-170000\client.key
	I0328 00:04:24.541974   13512 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-170000\client.crt with IP's: []
	I0328 00:04:24.889732   13512 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-170000\client.crt ...
	I0328 00:04:24.889732   13512 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-170000\client.crt: {Name:mkbdb6d224105d9846941bd7ef796bab37cf0d58 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 00:04:24.891476   13512 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-170000\client.key ...
	I0328 00:04:24.891476   13512 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-170000\client.key: {Name:mkc77ecfd07cf7c3fc46df723d6f544069ea69a2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 00:04:24.892258   13512 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-170000\apiserver.key.39f1c9ec
	I0328 00:04:24.892258   13512 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-170000\apiserver.crt.39f1c9ec with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.28.239.31 172.28.239.254]
	I0328 00:04:25.007256   13512 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-170000\apiserver.crt.39f1c9ec ...
	I0328 00:04:25.007256   13512 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-170000\apiserver.crt.39f1c9ec: {Name:mkcb18f777d1e527b25f5e2d8323733bcddf4084 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 00:04:25.008261   13512 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-170000\apiserver.key.39f1c9ec ...
	I0328 00:04:25.008261   13512 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-170000\apiserver.key.39f1c9ec: {Name:mkf6e652cffa73383c36ee164b4d394733a7b5e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 00:04:25.009975   13512 certs.go:381] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-170000\apiserver.crt.39f1c9ec -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-170000\apiserver.crt
	I0328 00:04:25.021306   13512 certs.go:385] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-170000\apiserver.key.39f1c9ec -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-170000\apiserver.key
	I0328 00:04:25.022298   13512 certs.go:363] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-170000\proxy-client.key
	I0328 00:04:25.022298   13512 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-170000\proxy-client.crt with IP's: []
	I0328 00:04:25.110902   13512 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-170000\proxy-client.crt ...
	I0328 00:04:25.110902   13512 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-170000\proxy-client.crt: {Name:mkacef89a3d7b6653974b337f3650724fbf38da7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 00:04:25.112847   13512 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-170000\proxy-client.key ...
	I0328 00:04:25.112847   13512 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-170000\proxy-client.key: {Name:mkde445a6144006913f807287c915aaab44c2514 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 00:04:25.113116   13512 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0328 00:04:25.114052   13512 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0328 00:04:25.114277   13512 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0328 00:04:25.114277   13512 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0328 00:04:25.114277   13512 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-170000\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0328 00:04:25.114946   13512 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-170000\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0328 00:04:25.115098   13512 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-170000\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0328 00:04:25.130129   13512 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-170000\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0328 00:04:25.131047   13512 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\10460.pem (1338 bytes)
	W0328 00:04:25.131489   13512 certs.go:480] ignoring C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\10460_empty.pem, impossibly tiny 0 bytes
	I0328 00:04:25.131489   13512 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0328 00:04:25.131864   13512 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0328 00:04:25.132110   13512 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0328 00:04:25.132363   13512 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0328 00:04:25.132363   13512 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\104602.pem (1708 bytes)
	I0328 00:04:25.132363   13512 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\10460.pem -> /usr/share/ca-certificates/10460.pem
	I0328 00:04:25.132363   13512 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\104602.pem -> /usr/share/ca-certificates/104602.pem
	I0328 00:04:25.132363   13512 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0328 00:04:25.133921   13512 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0328 00:04:25.189389   13512 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0328 00:04:25.241672   13512 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0328 00:04:25.297242   13512 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0328 00:04:25.350750   13512 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-170000\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0328 00:04:25.405806   13512 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-170000\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0328 00:04:25.458162   13512 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-170000\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0328 00:04:25.508928   13512 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-170000\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0328 00:04:25.557284   13512 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\10460.pem --> /usr/share/ca-certificates/10460.pem (1338 bytes)
	I0328 00:04:25.609275   13512 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\104602.pem --> /usr/share/ca-certificates/104602.pem (1708 bytes)
	I0328 00:04:25.663990   13512 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0328 00:04:25.713805   13512 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0328 00:04:25.760702   13512 ssh_runner.go:195] Run: openssl version
	I0328 00:04:25.784698   13512 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10460.pem && ln -fs /usr/share/ca-certificates/10460.pem /etc/ssl/certs/10460.pem"
	I0328 00:04:25.820345   13512 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10460.pem
	I0328 00:04:25.827272   13512 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 27 23:40 /usr/share/ca-certificates/10460.pem
	I0328 00:04:25.840635   13512 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10460.pem
	I0328 00:04:25.864081   13512 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10460.pem /etc/ssl/certs/51391683.0"
	I0328 00:04:25.900491   13512 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/104602.pem && ln -fs /usr/share/ca-certificates/104602.pem /etc/ssl/certs/104602.pem"
	I0328 00:04:25.939189   13512 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/104602.pem
	I0328 00:04:25.948137   13512 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 27 23:40 /usr/share/ca-certificates/104602.pem
	I0328 00:04:25.966079   13512 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/104602.pem
	I0328 00:04:25.992570   13512 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/104602.pem /etc/ssl/certs/3ec20f2e.0"
	I0328 00:04:26.030316   13512 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0328 00:04:26.067296   13512 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0328 00:04:26.077075   13512 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 27 23:37 /usr/share/ca-certificates/minikubeCA.pem
	I0328 00:04:26.091019   13512 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0328 00:04:26.114155   13512 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
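	The `openssl x509 -hash` / `ln -fs` sequence above is how the copied CA certificates are made discoverable to OpenSSL: lookup by `-CApath` expects a symlink named `<subject-hash>.0` pointing at the PEM file. A standalone sketch in a scratch directory (the self-signed cert and names here are illustrative, not from the log):

```shell
dir=$(mktemp -d)

# Generate a throwaway self-signed cert to stand in for minikubeCA.pem.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 -subj "/CN=scratchCA" \
  -keyout "$dir/scratch.key" -out "$dir/scratchCA.pem" 2>/dev/null

# OpenSSL locates CAs by a hash of the subject name, so the symlink
# must be named <hash>.0 -- the same scheme as the ln -fs calls above.
hash=$(openssl x509 -hash -noout -in "$dir/scratchCA.pem")
ln -fs "$dir/scratchCA.pem" "$dir/$hash.0"

# Verification only succeeds once the hash-named link is in place.
openssl verify -CApath "$dir" "$dir/scratchCA.pem"
```

This is also why the log runs `ls -la` and `openssl x509 -hash` before linking: the hash depends on the certificate's subject, so the link name cannot be precomputed.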
	I0328 00:04:26.147424   13512 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0328 00:04:26.155773   13512 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0328 00:04:26.156426   13512 kubeadm.go:391] StartCluster: {Name:ha-170000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 Clu
sterName:ha-170000 Namespace:default APIServerHAVIP:172.28.239.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.28.239.31 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[
] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0328 00:04:26.167324   13512 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0328 00:04:26.206480   13512 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0328 00:04:26.238384   13512 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0328 00:04:26.272794   13512 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0328 00:04:26.298496   13512 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0328 00:04:26.298496   13512 kubeadm.go:156] found existing configuration files:
	
	I0328 00:04:26.313645   13512 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0328 00:04:26.335191   13512 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0328 00:04:26.348467   13512 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0328 00:04:26.380500   13512 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0328 00:04:26.401563   13512 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0328 00:04:26.415106   13512 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0328 00:04:26.446778   13512 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0328 00:04:26.463679   13512 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0328 00:04:26.477030   13512 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0328 00:04:26.509373   13512 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0328 00:04:26.528993   13512 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0328 00:04:26.542127   13512 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
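	The four grep-then-rm exchanges above apply one rule per kubeconfig: keep the file only if it already references `https://control-plane.minikube.internal:8443`, otherwise remove it so `kubeadm init` regenerates it. The same check can be sketched as a loop over scratch files (the directory layout and endpoint are taken from the log; the file contents are illustrative):

```shell
dir=$(mktemp -d)
endpoint='https://control-plane.minikube.internal:8443'

# Two stand-in kubeconfigs: one pointing at the expected endpoint, one stale.
printf 'server: %s\n' "$endpoint" > "$dir/admin.conf"
printf 'server: https://old-host:8443\n' > "$dir/kubelet.conf"

# Keep a config only if it references the expected endpoint; remove
# stale or missing ones (grep fails either way) before kubeadm runs.
for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
  if ! grep -q "$endpoint" "$dir/$f" 2>/dev/null; then
    rm -f "$dir/$f"
  fi
done
```

Note that in the log all four greps fail because the files do not exist yet (a fresh VM), which is why the subsequent `rm -f` calls are harmless no-ops.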
	I0328 00:04:26.563204   13512 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0328 00:04:27.068553   13512 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0328 00:04:44.050285   13512 kubeadm.go:309] [init] Using Kubernetes version: v1.29.3
	I0328 00:04:44.050285   13512 kubeadm.go:309] [preflight] Running pre-flight checks
	I0328 00:04:44.050285   13512 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0328 00:04:44.050977   13512 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0328 00:04:44.051256   13512 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0328 00:04:44.051256   13512 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0328 00:04:44.057027   13512 out.go:204]   - Generating certificates and keys ...
	I0328 00:04:44.057295   13512 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0328 00:04:44.057488   13512 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0328 00:04:44.057608   13512 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0328 00:04:44.057677   13512 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0328 00:04:44.057677   13512 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0328 00:04:44.057677   13512 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0328 00:04:44.058263   13512 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0328 00:04:44.058498   13512 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [ha-170000 localhost] and IPs [172.28.239.31 127.0.0.1 ::1]
	I0328 00:04:44.058601   13512 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0328 00:04:44.058804   13512 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [ha-170000 localhost] and IPs [172.28.239.31 127.0.0.1 ::1]
	I0328 00:04:44.058888   13512 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0328 00:04:44.058990   13512 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0328 00:04:44.059083   13512 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0328 00:04:44.059123   13512 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0328 00:04:44.059208   13512 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0328 00:04:44.059208   13512 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0328 00:04:44.059208   13512 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0328 00:04:44.059208   13512 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0328 00:04:44.059208   13512 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0328 00:04:44.059800   13512 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0328 00:04:44.059919   13512 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0328 00:04:44.063116   13512 out.go:204]   - Booting up control plane ...
	I0328 00:04:44.063116   13512 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0328 00:04:44.063116   13512 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0328 00:04:44.063804   13512 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0328 00:04:44.064082   13512 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0328 00:04:44.064216   13512 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0328 00:04:44.064216   13512 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0328 00:04:44.064216   13512 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0328 00:04:44.064861   13512 kubeadm.go:309] [apiclient] All control plane components are healthy after 9.581290 seconds
	I0328 00:04:44.065265   13512 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0328 00:04:44.065579   13512 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0328 00:04:44.065770   13512 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0328 00:04:44.065770   13512 kubeadm.go:309] [mark-control-plane] Marking the node ha-170000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0328 00:04:44.065770   13512 kubeadm.go:309] [bootstrap-token] Using token: bbl8hi.q2n8vw1p7nxt5s93
	I0328 00:04:44.069132   13512 out.go:204]   - Configuring RBAC rules ...
	I0328 00:04:44.069191   13512 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0328 00:04:44.069191   13512 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0328 00:04:44.069900   13512 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0328 00:04:44.070097   13512 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0328 00:04:44.070097   13512 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0328 00:04:44.070097   13512 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0328 00:04:44.070801   13512 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0328 00:04:44.071001   13512 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0328 00:04:44.071001   13512 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0328 00:04:44.071001   13512 kubeadm.go:309] 
	I0328 00:04:44.071001   13512 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0328 00:04:44.071001   13512 kubeadm.go:309] 
	I0328 00:04:44.071001   13512 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0328 00:04:44.071001   13512 kubeadm.go:309] 
	I0328 00:04:44.071001   13512 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0328 00:04:44.071700   13512 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0328 00:04:44.071817   13512 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0328 00:04:44.071817   13512 kubeadm.go:309] 
	I0328 00:04:44.072036   13512 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0328 00:04:44.072036   13512 kubeadm.go:309] 
	I0328 00:04:44.072243   13512 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0328 00:04:44.072243   13512 kubeadm.go:309] 
	I0328 00:04:44.072583   13512 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0328 00:04:44.072789   13512 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0328 00:04:44.072789   13512 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0328 00:04:44.073021   13512 kubeadm.go:309] 
	I0328 00:04:44.073191   13512 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0328 00:04:44.073387   13512 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0328 00:04:44.073387   13512 kubeadm.go:309] 
	I0328 00:04:44.073607   13512 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token bbl8hi.q2n8vw1p7nxt5s93 \
	I0328 00:04:44.073811   13512 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:0ddd52f56960165cc9115f65779af061c85c9b2eafbef06ee2128eb63ce54d7a \
	I0328 00:04:44.073811   13512 kubeadm.go:309] 	--control-plane 
	I0328 00:04:44.073811   13512 kubeadm.go:309] 
	I0328 00:04:44.073811   13512 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0328 00:04:44.073811   13512 kubeadm.go:309] 
	I0328 00:04:44.074416   13512 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token bbl8hi.q2n8vw1p7nxt5s93 \
	I0328 00:04:44.074416   13512 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:0ddd52f56960165cc9115f65779af061c85c9b2eafbef06ee2128eb63ce54d7a 
	I0328 00:04:44.074689   13512 cni.go:84] Creating CNI manager for ""
	I0328 00:04:44.074689   13512 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0328 00:04:44.079039   13512 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0328 00:04:44.094935   13512 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0328 00:04:44.104368   13512 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.29.3/kubectl ...
	I0328 00:04:44.104368   13512 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0328 00:04:44.180388   13512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0328 00:04:44.887312   13512 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0328 00:04:44.902050   13512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-170000 minikube.k8s.io/updated_at=2024_03_28T00_04_44_0700 minikube.k8s.io/version=v1.33.0-beta.0 minikube.k8s.io/commit=1a940f980f77c9bfc8ef93678db8c1f49c9dd79d minikube.k8s.io/name=ha-170000 minikube.k8s.io/primary=true
	I0328 00:04:44.902875   13512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 00:04:44.919130   13512 ops.go:34] apiserver oom_adj: -16
	I0328 00:04:45.221398   13512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 00:04:45.730101   13512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 00:04:46.233081   13512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 00:04:46.734954   13512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 00:04:47.222864   13512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 00:04:47.729957   13512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 00:04:48.226894   13512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 00:04:48.734611   13512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 00:04:49.233999   13512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 00:04:49.722783   13512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 00:04:50.237639   13512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 00:04:50.730152   13512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 00:04:51.236596   13512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 00:04:51.724284   13512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 00:04:52.229512   13512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 00:04:52.736471   13512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 00:04:53.229735   13512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 00:04:53.722544   13512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 00:04:54.229320   13512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 00:04:54.735039   13512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 00:04:55.235745   13512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 00:04:55.723650   13512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 00:04:56.231817   13512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 00:04:56.724414   13512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 00:04:56.874418   13512 kubeadm.go:1107] duration metric: took 11.9869745s to wait for elevateKubeSystemPrivileges
	W0328 00:04:56.874418   13512 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0328 00:04:56.874418   13512 kubeadm.go:393] duration metric: took 30.7178043s to StartCluster
	I0328 00:04:56.874418   13512 settings.go:142] acquiring lock: {Name:mk6b97e58c5fe8f88c3b8025e136ed13b1b7453d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 00:04:56.874418   13512 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0328 00:04:56.877203   13512 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\kubeconfig: {Name:mk4f4c590fd703778dedd3b8c3d630c561af8c6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 00:04:56.878666   13512 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0328 00:04:56.878763   13512 start.go:232] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:172.28.239.31 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0328 00:04:56.878763   13512 start.go:240] waiting for startup goroutines ...
	I0328 00:04:56.878889   13512 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0328 00:04:56.878994   13512 addons.go:69] Setting storage-provisioner=true in profile "ha-170000"
	I0328 00:04:56.878994   13512 addons.go:69] Setting default-storageclass=true in profile "ha-170000"
	I0328 00:04:56.878994   13512 addons.go:234] Setting addon storage-provisioner=true in "ha-170000"
	I0328 00:04:56.879107   13512 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-170000"
	I0328 00:04:56.879147   13512 host.go:66] Checking if "ha-170000" exists ...
	I0328 00:04:56.879398   13512 config.go:182] Loaded profile config "ha-170000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0328 00:04:56.881038   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-170000 ).state
	I0328 00:04:56.881341   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-170000 ).state
	I0328 00:04:57.053117   13512 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.28.224.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0328 00:04:57.696972   13512 start.go:948] {"host.minikube.internal": 172.28.224.1} host record injected into CoreDNS's ConfigMap
	I0328 00:04:59.262245   13512 main.go:141] libmachine: [stdout =====>] : Running
	
	I0328 00:04:59.262524   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:04:59.265237   13512 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0328 00:04:59.262596   13512 main.go:141] libmachine: [stdout =====>] : Running
	
	I0328 00:04:59.265278   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:04:59.265907   13512 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0328 00:04:59.267542   13512 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0328 00:04:59.267542   13512 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0328 00:04:59.267542   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-170000 ).state
	I0328 00:04:59.268231   13512 kapi.go:59] client config for ha-170000: &rest.Config{Host:"https://172.28.239.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-170000\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-170000\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), Ne
xtProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x26ab500), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0328 00:04:59.269341   13512 cert_rotation.go:137] Starting client certificate rotation controller
	I0328 00:04:59.269341   13512 addons.go:234] Setting addon default-storageclass=true in "ha-170000"
	I0328 00:04:59.269976   13512 host.go:66] Checking if "ha-170000" exists ...
	I0328 00:04:59.270802   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-170000 ).state
	I0328 00:05:01.631356   13512 main.go:141] libmachine: [stdout =====>] : Running
	
	I0328 00:05:01.631356   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:05:01.631583   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-170000 ).networkadapters[0]).ipaddresses[0]
	I0328 00:05:01.748305   13512 main.go:141] libmachine: [stdout =====>] : Running
	
	I0328 00:05:01.749007   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:05:01.749081   13512 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0328 00:05:01.749081   13512 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0328 00:05:01.749081   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-170000 ).state
	I0328 00:05:04.071353   13512 main.go:141] libmachine: [stdout =====>] : Running
	
	I0328 00:05:04.071476   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:05:04.071541   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-170000 ).networkadapters[0]).ipaddresses[0]
	I0328 00:05:04.476614   13512 main.go:141] libmachine: [stdout =====>] : 172.28.239.31
	
	I0328 00:05:04.625714   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:05:04.626530   13512 sshutil.go:53] new ssh client: &{IP:172.28.239.31 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-170000\id_rsa Username:docker}
	I0328 00:05:04.779520   13512 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0328 00:05:06.863579   13512 main.go:141] libmachine: [stdout =====>] : 172.28.239.31
	
	I0328 00:05:06.863579   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:05:06.864650   13512 sshutil.go:53] new ssh client: &{IP:172.28.239.31 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-170000\id_rsa Username:docker}
	I0328 00:05:07.008495   13512 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0328 00:05:07.250980   13512 round_trippers.go:463] GET https://172.28.239.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0328 00:05:07.250980   13512 round_trippers.go:469] Request Headers:
	I0328 00:05:07.250980   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:05:07.250980   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:05:07.265874   13512 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0328 00:05:07.268022   13512 round_trippers.go:463] PUT https://172.28.239.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0328 00:05:07.268022   13512 round_trippers.go:469] Request Headers:
	I0328 00:05:07.268022   13512 round_trippers.go:473]     Content-Type: application/json
	I0328 00:05:07.268022   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:05:07.268022   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:05:07.275661   13512 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0328 00:05:07.280569   13512 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0328 00:05:07.283444   13512 addons.go:505] duration metric: took 10.4044913s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0328 00:05:07.283444   13512 start.go:245] waiting for cluster config update ...
	I0328 00:05:07.283444   13512 start.go:254] writing updated cluster config ...
	I0328 00:05:07.285862   13512 out.go:177] 
	I0328 00:05:07.297882   13512 config.go:182] Loaded profile config "ha-170000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0328 00:05:07.298076   13512 profile.go:142] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-170000\config.json ...
	I0328 00:05:07.304011   13512 out.go:177] * Starting "ha-170000-m02" control-plane node in "ha-170000" cluster
	I0328 00:05:07.306748   13512 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0328 00:05:07.306808   13512 cache.go:56] Caching tarball of preloaded images
	I0328 00:05:07.307204   13512 preload.go:173] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0328 00:05:07.307371   13512 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0328 00:05:07.307650   13512 profile.go:142] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-170000\config.json ...
	I0328 00:05:07.314009   13512 start.go:360] acquireMachinesLock for ha-170000-m02: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0328 00:05:07.314009   13512 start.go:364] duration metric: took 0s to acquireMachinesLock for "ha-170000-m02"
	I0328 00:05:07.314009   13512 start.go:93] Provisioning new machine with config: &{Name:ha-170000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuberne
tesVersion:v1.29.3 ClusterName:ha-170000 Namespace:default APIServerHAVIP:172.28.239.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.28.239.31 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisk
s:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0328 00:05:07.314009   13512 start.go:125] createHost starting for "m02" (driver="hyperv")
	I0328 00:05:07.319405   13512 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0328 00:05:07.319405   13512 start.go:159] libmachine.API.Create for "ha-170000" (driver="hyperv")
	I0328 00:05:07.319405   13512 client.go:168] LocalClient.Create starting
	I0328 00:05:07.320428   13512 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem
	I0328 00:05:07.320706   13512 main.go:141] libmachine: Decoding PEM data...
	I0328 00:05:07.320706   13512 main.go:141] libmachine: Parsing certificate...
	I0328 00:05:07.320706   13512 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem
	I0328 00:05:07.321126   13512 main.go:141] libmachine: Decoding PEM data...
	I0328 00:05:07.321126   13512 main.go:141] libmachine: Parsing certificate...
	I0328 00:05:07.321126   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0328 00:05:09.356776   13512 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0328 00:05:09.356776   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:05:09.356776   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0328 00:05:11.305061   13512 main.go:141] libmachine: [stdout =====>] : False
	
	I0328 00:05:11.306046   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:05:11.306206   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0328 00:05:12.914979   13512 main.go:141] libmachine: [stdout =====>] : True
	
	I0328 00:05:12.915461   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:05:12.915523   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0328 00:05:16.903950   13512 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0328 00:05:16.903950   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:05:16.906660   13512 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube6/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.0-1711559712-18485-amd64.iso...
	I0328 00:05:17.446540   13512 main.go:141] libmachine: Creating SSH key...
	I0328 00:05:17.511172   13512 main.go:141] libmachine: Creating VM...
	I0328 00:05:17.511172   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0328 00:05:20.612723   13512 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0328 00:05:20.612723   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:05:20.612723   13512 main.go:141] libmachine: Using switch "Default Switch"
	I0328 00:05:20.612723   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0328 00:05:22.578617   13512 main.go:141] libmachine: [stdout =====>] : True
	
	I0328 00:05:22.578617   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:05:22.578617   13512 main.go:141] libmachine: Creating VHD
	I0328 00:05:22.578617   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-170000-m02\fixed.vhd' -SizeBytes 10MB -Fixed
	I0328 00:05:26.537687   13512 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube6
	Path                    : C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-170000-m02\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 076652CA-7F4B-4D65-839E-2816676E6A32
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0328 00:05:26.538010   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:05:26.538137   13512 main.go:141] libmachine: Writing magic tar header
	I0328 00:05:26.538137   13512 main.go:141] libmachine: Writing SSH key tar header
	I0328 00:05:26.538927   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-170000-m02\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-170000-m02\disk.vhd' -VHDType Dynamic -DeleteSource
	I0328 00:05:29.843016   13512 main.go:141] libmachine: [stdout =====>] : 
	I0328 00:05:29.843016   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:05:29.843016   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-170000-m02\disk.vhd' -SizeBytes 20000MB
	I0328 00:05:32.515821   13512 main.go:141] libmachine: [stdout =====>] : 
	I0328 00:05:32.516843   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:05:32.516843   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-170000-m02 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-170000-m02' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0328 00:05:36.383060   13512 main.go:141] libmachine: [stdout =====>] : 
	Name          State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----          ----- ----------- ----------------- ------   ------             -------
	ha-170000-m02 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0328 00:05:36.383060   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:05:36.383060   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-170000-m02 -DynamicMemoryEnabled $false
	I0328 00:05:38.754580   13512 main.go:141] libmachine: [stdout =====>] : 
	I0328 00:05:38.754580   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:05:38.754850   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-170000-m02 -Count 2
	I0328 00:05:41.099786   13512 main.go:141] libmachine: [stdout =====>] : 
	I0328 00:05:41.100212   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:05:41.100325   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-170000-m02 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-170000-m02\boot2docker.iso'
	I0328 00:05:43.851046   13512 main.go:141] libmachine: [stdout =====>] : 
	I0328 00:05:43.851046   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:05:43.851046   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-170000-m02 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-170000-m02\disk.vhd'
	I0328 00:05:46.682875   13512 main.go:141] libmachine: [stdout =====>] : 
	I0328 00:05:46.683421   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:05:46.683421   13512 main.go:141] libmachine: Starting VM...
	I0328 00:05:46.683421   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-170000-m02
	I0328 00:05:49.931694   13512 main.go:141] libmachine: [stdout =====>] : 
	I0328 00:05:49.931694   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:05:49.931694   13512 main.go:141] libmachine: Waiting for host to start...
	I0328 00:05:49.931899   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-170000-m02 ).state
	I0328 00:05:52.337624   13512 main.go:141] libmachine: [stdout =====>] : Running
	
	I0328 00:05:52.337624   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:05:52.337624   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-170000-m02 ).networkadapters[0]).ipaddresses[0]
	I0328 00:05:55.018777   13512 main.go:141] libmachine: [stdout =====>] : 
	I0328 00:05:55.018777   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:05:56.028175   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-170000-m02 ).state
	I0328 00:05:58.368907   13512 main.go:141] libmachine: [stdout =====>] : Running
	
	I0328 00:05:58.368907   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:05:58.368907   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-170000-m02 ).networkadapters[0]).ipaddresses[0]
	I0328 00:06:01.108016   13512 main.go:141] libmachine: [stdout =====>] : 
	I0328 00:06:01.108016   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:06:02.119779   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-170000-m02 ).state
	I0328 00:06:04.480406   13512 main.go:141] libmachine: [stdout =====>] : Running
	
	I0328 00:06:04.480406   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:06:04.480406   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-170000-m02 ).networkadapters[0]).ipaddresses[0]
	I0328 00:06:07.157667   13512 main.go:141] libmachine: [stdout =====>] : 
	I0328 00:06:07.157667   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:06:08.162347   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-170000-m02 ).state
	I0328 00:06:10.468190   13512 main.go:141] libmachine: [stdout =====>] : Running
	
	I0328 00:06:10.469004   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:06:10.469065   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-170000-m02 ).networkadapters[0]).ipaddresses[0]
	I0328 00:06:13.157313   13512 main.go:141] libmachine: [stdout =====>] : 
	I0328 00:06:13.157313   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:06:14.172498   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-170000-m02 ).state
	I0328 00:06:16.526683   13512 main.go:141] libmachine: [stdout =====>] : Running
	
	I0328 00:06:16.526683   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:06:16.526683   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-170000-m02 ).networkadapters[0]).ipaddresses[0]
	I0328 00:06:19.333183   13512 main.go:141] libmachine: [stdout =====>] : 172.28.224.3
	
	I0328 00:06:19.333998   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:06:19.333998   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-170000-m02 ).state
	I0328 00:06:21.590768   13512 main.go:141] libmachine: [stdout =====>] : Running
	
	I0328 00:06:21.590768   13512 main.go:141] libmachine: [stderr =====>] : 
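	The "Waiting for host to start..." sequence above repeatedly queries the VM state and its first IPv4 address until Hyper-V reports one. A minimal self-contained sketch of that retry loop follows; the injected `query` function is a hypothetical stand-in for the real PowerShell `Hyper-V\Get-VM` call, and the retry count is illustrative, not minikube's actual value.

	```go
	package main

	import (
		"errors"
		"fmt"
	)

	// waitForIP polls an IP-lookup function until it returns a non-empty
	// address or the retry budget is exhausted, mirroring the loop in the
	// log. The real code shells out to PowerShell between attempts.
	func waitForIP(query func() string, maxTries int) (string, error) {
		for i := 0; i < maxTries; i++ {
			if ip := query(); ip != "" {
				return ip, nil
			}
		}
		return "", errors.New("host did not report an IP address")
	}

	func main() {
		tries := 0
		fake := func() string { // hypothetical stand-in for the PowerShell call
			tries++
			if tries < 5 {
				return "" // adapter not ready yet, as in the first few polls
			}
			return "172.28.224.3"
		}
		ip, err := waitForIP(fake, 10)
		fmt.Println(ip, err)
	}
	```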
	I0328 00:06:21.590768   13512 machine.go:94] provisionDockerMachine start ...
	I0328 00:06:21.591673   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-170000-m02 ).state
	I0328 00:06:23.904071   13512 main.go:141] libmachine: [stdout =====>] : Running
	
	I0328 00:06:23.904071   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:06:23.904071   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-170000-m02 ).networkadapters[0]).ipaddresses[0]
	I0328 00:06:26.667444   13512 main.go:141] libmachine: [stdout =====>] : 172.28.224.3
	
	I0328 00:06:26.667444   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:06:26.674043   13512 main.go:141] libmachine: Using SSH client type: native
	I0328 00:06:26.674258   13512 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x12a9f80] 0x12acb60 <nil>  [] 0s} 172.28.224.3 22 <nil> <nil>}
	I0328 00:06:26.674258   13512 main.go:141] libmachine: About to run SSH command:
	hostname
	I0328 00:06:26.811696   13512 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0328 00:06:26.811696   13512 buildroot.go:166] provisioning hostname "ha-170000-m02"
	I0328 00:06:26.811696   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-170000-m02 ).state
	I0328 00:06:29.131444   13512 main.go:141] libmachine: [stdout =====>] : Running
	
	I0328 00:06:29.131444   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:06:29.131765   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-170000-m02 ).networkadapters[0]).ipaddresses[0]
	I0328 00:06:31.837883   13512 main.go:141] libmachine: [stdout =====>] : 172.28.224.3
	
	I0328 00:06:31.837883   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:06:31.845490   13512 main.go:141] libmachine: Using SSH client type: native
	I0328 00:06:31.846118   13512 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x12a9f80] 0x12acb60 <nil>  [] 0s} 172.28.224.3 22 <nil> <nil>}
	I0328 00:06:31.846118   13512 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-170000-m02 && echo "ha-170000-m02" | sudo tee /etc/hostname
	I0328 00:06:32.030205   13512 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-170000-m02
	
	I0328 00:06:32.030264   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-170000-m02 ).state
	I0328 00:06:34.332975   13512 main.go:141] libmachine: [stdout =====>] : Running
	
	I0328 00:06:34.332975   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:06:34.333082   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-170000-m02 ).networkadapters[0]).ipaddresses[0]
	I0328 00:06:37.053625   13512 main.go:141] libmachine: [stdout =====>] : 172.28.224.3
	
	I0328 00:06:37.054770   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:06:37.060282   13512 main.go:141] libmachine: Using SSH client type: native
	I0328 00:06:37.060282   13512 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x12a9f80] 0x12acb60 <nil>  [] 0s} 172.28.224.3 22 <nil> <nil>}
	I0328 00:06:37.060921   13512 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-170000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-170000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-170000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0328 00:06:37.223529   13512 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0328 00:06:37.223529   13512 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0328 00:06:37.223529   13512 buildroot.go:174] setting up certificates
	I0328 00:06:37.223529   13512 provision.go:84] configureAuth start
	I0328 00:06:37.223529   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-170000-m02 ).state
	I0328 00:06:39.519694   13512 main.go:141] libmachine: [stdout =====>] : Running
	
	I0328 00:06:39.519694   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:06:39.519694   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-170000-m02 ).networkadapters[0]).ipaddresses[0]
	I0328 00:06:42.281656   13512 main.go:141] libmachine: [stdout =====>] : 172.28.224.3
	
	I0328 00:06:42.282072   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:06:42.282148   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-170000-m02 ).state
	I0328 00:06:44.553218   13512 main.go:141] libmachine: [stdout =====>] : Running
	
	I0328 00:06:44.553218   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:06:44.553218   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-170000-m02 ).networkadapters[0]).ipaddresses[0]
	I0328 00:06:47.273230   13512 main.go:141] libmachine: [stdout =====>] : 172.28.224.3
	
	I0328 00:06:47.274092   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:06:47.274227   13512 provision.go:143] copyHostCerts
	I0328 00:06:47.274420   13512 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0328 00:06:47.274730   13512 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0328 00:06:47.274730   13512 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0328 00:06:47.275170   13512 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0328 00:06:47.276372   13512 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0328 00:06:47.276812   13512 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0328 00:06:47.276940   13512 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0328 00:06:47.277407   13512 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0328 00:06:47.278410   13512 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0328 00:06:47.278692   13512 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0328 00:06:47.278768   13512 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0328 00:06:47.279100   13512 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1675 bytes)
	I0328 00:06:47.279971   13512 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-170000-m02 san=[127.0.0.1 172.28.224.3 ha-170000-m02 localhost minikube]
	I0328 00:06:47.524734   13512 provision.go:177] copyRemoteCerts
	I0328 00:06:47.540342   13512 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0328 00:06:47.540444   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-170000-m02 ).state
	I0328 00:06:49.853777   13512 main.go:141] libmachine: [stdout =====>] : Running
	
	I0328 00:06:49.854847   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:06:49.854977   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-170000-m02 ).networkadapters[0]).ipaddresses[0]
	I0328 00:06:52.656964   13512 main.go:141] libmachine: [stdout =====>] : 172.28.224.3
	
	I0328 00:06:52.656964   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:06:52.657733   13512 sshutil.go:53] new ssh client: &{IP:172.28.224.3 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-170000-m02\id_rsa Username:docker}
	I0328 00:06:52.778676   13512 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (5.2383016s)
	I0328 00:06:52.778676   13512 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0328 00:06:52.778676   13512 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0328 00:06:52.829546   13512 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0328 00:06:52.830230   13512 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I0328 00:06:52.883823   13512 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0328 00:06:52.884465   13512 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0328 00:06:52.937921   13512 provision.go:87] duration metric: took 15.714296s to configureAuth
	I0328 00:06:52.937921   13512 buildroot.go:189] setting minikube options for container-runtime
	I0328 00:06:52.938614   13512 config.go:182] Loaded profile config "ha-170000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0328 00:06:52.938614   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-170000-m02 ).state
	I0328 00:06:55.228234   13512 main.go:141] libmachine: [stdout =====>] : Running
	
	I0328 00:06:55.228234   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:06:55.228234   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-170000-m02 ).networkadapters[0]).ipaddresses[0]
	I0328 00:06:57.962599   13512 main.go:141] libmachine: [stdout =====>] : 172.28.224.3
	
	I0328 00:06:57.963410   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:06:57.969438   13512 main.go:141] libmachine: Using SSH client type: native
	I0328 00:06:57.970172   13512 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x12a9f80] 0x12acb60 <nil>  [] 0s} 172.28.224.3 22 <nil> <nil>}
	I0328 00:06:57.970172   13512 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0328 00:06:58.115270   13512 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0328 00:06:58.115270   13512 buildroot.go:70] root file system type: tmpfs
	I0328 00:06:58.115270   13512 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0328 00:06:58.115532   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-170000-m02 ).state
	I0328 00:07:00.418973   13512 main.go:141] libmachine: [stdout =====>] : Running
	
	I0328 00:07:00.419513   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:07:00.419581   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-170000-m02 ).networkadapters[0]).ipaddresses[0]
	I0328 00:07:03.119785   13512 main.go:141] libmachine: [stdout =====>] : 172.28.224.3
	
	I0328 00:07:03.120911   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:07:03.126511   13512 main.go:141] libmachine: Using SSH client type: native
	I0328 00:07:03.127324   13512 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x12a9f80] 0x12acb60 <nil>  [] 0s} 172.28.224.3 22 <nil> <nil>}
	I0328 00:07:03.127324   13512 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.28.239.31"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0328 00:07:03.298421   13512 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.28.239.31
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0328 00:07:03.298421   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-170000-m02 ).state
	I0328 00:07:05.554691   13512 main.go:141] libmachine: [stdout =====>] : Running
	
	I0328 00:07:05.555514   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:07:05.555584   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-170000-m02 ).networkadapters[0]).ipaddresses[0]
	I0328 00:07:08.333940   13512 main.go:141] libmachine: [stdout =====>] : 172.28.224.3
	
	I0328 00:07:08.334948   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:07:08.341896   13512 main.go:141] libmachine: Using SSH client type: native
	I0328 00:07:08.342516   13512 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x12a9f80] 0x12acb60 <nil>  [] 0s} 172.28.224.3 22 <nil> <nil>}
	I0328 00:07:08.342701   13512 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0328 00:07:10.618646   13512 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
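	The `diff ... || { mv ...; systemctl restart ...; }` command above installs the new unit file only when it differs from what is already on disk, so an unchanged Docker service is not needlessly restarted. A sketch of that compare-and-swap idea, using temp files rather than the real `/lib/systemd/system` layout (the function name and return value are illustrative):

	```go
	package main

	import (
		"bytes"
		"fmt"
		"os"
		"path/filepath"
	)

	// installIfChanged writes the candidate unit and swaps it into place
	// only when its content differs from the installed file; the boolean
	// signals that a daemon-reload/restart would be needed.
	func installIfChanged(current, candidate string, body []byte) (bool, error) {
		old, readErr := os.ReadFile(current)
		if readErr == nil && bytes.Equal(old, body) {
			return false, nil // unchanged, keep the running service
		}
		if err := os.WriteFile(candidate, body, 0o644); err != nil {
			return false, err
		}
		return true, os.Rename(candidate, current)
	}

	func main() {
		dir, _ := os.MkdirTemp("", "unit")
		cur := filepath.Join(dir, "docker.service")
		cand := cur + ".new"
		r1, _ := installIfChanged(cur, cand, []byte("[Unit]\n"))
		r2, _ := installIfChanged(cur, cand, []byte("[Unit]\n"))
		fmt.Println(r1, r2)
	}
	```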
	I0328 00:07:10.618873   13512 machine.go:97] duration metric: took 49.0270489s to provisionDockerMachine
	I0328 00:07:10.618949   13512 client.go:171] duration metric: took 2m3.2987162s to LocalClient.Create
	I0328 00:07:10.619019   13512 start.go:167] duration metric: took 2m3.2988614s to libmachine.API.Create "ha-170000"
	I0328 00:07:10.619019   13512 start.go:293] postStartSetup for "ha-170000-m02" (driver="hyperv")
	I0328 00:07:10.619019   13512 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0328 00:07:10.635634   13512 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0328 00:07:10.635634   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-170000-m02 ).state
	I0328 00:07:12.926535   13512 main.go:141] libmachine: [stdout =====>] : Running
	
	I0328 00:07:12.926535   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:07:12.926535   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-170000-m02 ).networkadapters[0]).ipaddresses[0]
	I0328 00:07:15.678792   13512 main.go:141] libmachine: [stdout =====>] : 172.28.224.3
	
	I0328 00:07:15.678792   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:07:15.679454   13512 sshutil.go:53] new ssh client: &{IP:172.28.224.3 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-170000-m02\id_rsa Username:docker}
	I0328 00:07:15.789362   13512 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (5.153697s)
	I0328 00:07:15.803473   13512 ssh_runner.go:195] Run: cat /etc/os-release
	I0328 00:07:15.810731   13512 info.go:137] Remote host: Buildroot 2023.02.9
	I0328 00:07:15.810731   13512 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0328 00:07:15.810731   13512 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0328 00:07:15.811683   13512 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\104602.pem -> 104602.pem in /etc/ssl/certs
	I0328 00:07:15.811683   13512 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\104602.pem -> /etc/ssl/certs/104602.pem
	I0328 00:07:15.826457   13512 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0328 00:07:15.848393   13512 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\104602.pem --> /etc/ssl/certs/104602.pem (1708 bytes)
	I0328 00:07:15.901736   13512 start.go:296] duration metric: took 5.2826853s for postStartSetup
	I0328 00:07:15.905918   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-170000-m02 ).state
	I0328 00:07:18.239812   13512 main.go:141] libmachine: [stdout =====>] : Running
	
	I0328 00:07:18.239812   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:07:18.240530   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-170000-m02 ).networkadapters[0]).ipaddresses[0]
	I0328 00:07:21.023021   13512 main.go:141] libmachine: [stdout =====>] : 172.28.224.3
	
	I0328 00:07:21.023021   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:07:21.023021   13512 profile.go:142] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-170000\config.json ...
	I0328 00:07:21.026324   13512 start.go:128] duration metric: took 2m13.7114986s to createHost
	I0328 00:07:21.026435   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-170000-m02 ).state
	I0328 00:07:23.327939   13512 main.go:141] libmachine: [stdout =====>] : Running
	
	I0328 00:07:23.327939   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:07:23.327939   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-170000-m02 ).networkadapters[0]).ipaddresses[0]
	I0328 00:07:26.034219   13512 main.go:141] libmachine: [stdout =====>] : 172.28.224.3
	
	I0328 00:07:26.034219   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:07:26.039833   13512 main.go:141] libmachine: Using SSH client type: native
	I0328 00:07:26.040607   13512 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x12a9f80] 0x12acb60 <nil>  [] 0s} 172.28.224.3 22 <nil> <nil>}
	I0328 00:07:26.040607   13512 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0328 00:07:26.183476   13512 main.go:141] libmachine: SSH cmd err, output: <nil>: 1711584446.187547843
	
	I0328 00:07:26.183808   13512 fix.go:216] guest clock: 1711584446.187547843
	I0328 00:07:26.183808   13512 fix.go:229] Guest: 2024-03-28 00:07:26.187547843 +0000 UTC Remote: 2024-03-28 00:07:21.0264354 +0000 UTC m=+356.784366001 (delta=5.161112443s)
	I0328 00:07:26.183808   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-170000-m02 ).state
	I0328 00:07:28.466124   13512 main.go:141] libmachine: [stdout =====>] : Running
	
	I0328 00:07:28.466124   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:07:28.466124   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-170000-m02 ).networkadapters[0]).ipaddresses[0]
	I0328 00:07:31.201082   13512 main.go:141] libmachine: [stdout =====>] : 172.28.224.3
	
	I0328 00:07:31.201195   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:07:31.208394   13512 main.go:141] libmachine: Using SSH client type: native
	I0328 00:07:31.209080   13512 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x12a9f80] 0x12acb60 <nil>  [] 0s} 172.28.224.3 22 <nil> <nil>}
	I0328 00:07:31.209080   13512 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1711584446
	I0328 00:07:31.367297   13512 main.go:141] libmachine: SSH cmd err, output: <nil>: Thu Mar 28 00:07:26 UTC 2024
	
	I0328 00:07:31.368063   13512 fix.go:236] clock set: Thu Mar 28 00:07:26 UTC 2024
	 (err=<nil>)
	I0328 00:07:31.368157   13512 start.go:83] releasing machines lock for "ha-170000-m02", held for 2m24.0531746s
	I0328 00:07:31.368403   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-170000-m02 ).state
	I0328 00:07:33.694120   13512 main.go:141] libmachine: [stdout =====>] : Running
	
	I0328 00:07:33.694120   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:07:33.694288   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-170000-m02 ).networkadapters[0]).ipaddresses[0]
	I0328 00:07:36.422801   13512 main.go:141] libmachine: [stdout =====>] : 172.28.224.3
	
	I0328 00:07:36.422801   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:07:36.426247   13512 out.go:177] * Found network options:
	I0328 00:07:36.429394   13512 out.go:177]   - NO_PROXY=172.28.239.31
	W0328 00:07:36.431972   13512 proxy.go:119] fail to check proxy env: Error ip not in block
	I0328 00:07:36.434677   13512 out.go:177]   - NO_PROXY=172.28.239.31
	W0328 00:07:36.437169   13512 proxy.go:119] fail to check proxy env: Error ip not in block
	W0328 00:07:36.438588   13512 proxy.go:119] fail to check proxy env: Error ip not in block
	I0328 00:07:36.441320   13512 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0328 00:07:36.441320   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-170000-m02 ).state
	I0328 00:07:36.451302   13512 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0328 00:07:36.451302   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-170000-m02 ).state
	I0328 00:07:38.751450   13512 main.go:141] libmachine: [stdout =====>] : Running
	
	I0328 00:07:38.751662   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:07:38.751662   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-170000-m02 ).networkadapters[0]).ipaddresses[0]
	I0328 00:07:38.773029   13512 main.go:141] libmachine: [stdout =====>] : Running
	
	I0328 00:07:38.774019   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:07:38.774101   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-170000-m02 ).networkadapters[0]).ipaddresses[0]
	I0328 00:07:41.575039   13512 main.go:141] libmachine: [stdout =====>] : 172.28.224.3
	
	I0328 00:07:41.575039   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:07:41.576523   13512 sshutil.go:53] new ssh client: &{IP:172.28.224.3 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-170000-m02\id_rsa Username:docker}
	I0328 00:07:41.603110   13512 main.go:141] libmachine: [stdout =====>] : 172.28.224.3
	
	I0328 00:07:41.603215   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:07:41.603770   13512 sshutil.go:53] new ssh client: &{IP:172.28.224.3 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-170000-m02\id_rsa Username:docker}
	I0328 00:07:41.674730   13512 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (5.2232981s)
	W0328 00:07:41.674730   13512 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0328 00:07:41.687428   13512 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0328 00:07:41.763236   13512 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0328 00:07:41.763355   13512 start.go:494] detecting cgroup driver to use...
	I0328 00:07:41.763236   13512 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.3218829s)
	I0328 00:07:41.763599   13512 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0328 00:07:41.817616   13512 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0328 00:07:41.852851   13512 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0328 00:07:41.872994   13512 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0328 00:07:41.885629   13512 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0328 00:07:41.923626   13512 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0328 00:07:41.960487   13512 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0328 00:07:41.995368   13512 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0328 00:07:42.034518   13512 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0328 00:07:42.071006   13512 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0328 00:07:42.103255   13512 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0328 00:07:42.136287   13512 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0328 00:07:42.175670   13512 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0328 00:07:42.210074   13512 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0328 00:07:42.244740   13512 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0328 00:07:42.463312   13512 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0328 00:07:42.500475   13512 start.go:494] detecting cgroup driver to use...
	I0328 00:07:42.515066   13512 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0328 00:07:42.553167   13512 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0328 00:07:42.594906   13512 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0328 00:07:42.643785   13512 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0328 00:07:42.681262   13512 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0328 00:07:42.718178   13512 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0328 00:07:42.783718   13512 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0328 00:07:42.812479   13512 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0328 00:07:42.866745   13512 ssh_runner.go:195] Run: which cri-dockerd
	I0328 00:07:42.889106   13512 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0328 00:07:42.910627   13512 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0328 00:07:42.962501   13512 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0328 00:07:43.179611   13512 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0328 00:07:43.400250   13512 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0328 00:07:43.400250   13512 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0328 00:07:43.449535   13512 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0328 00:07:43.679147   13512 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0328 00:07:46.250045   13512 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5708825s)
	I0328 00:07:46.262711   13512 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0328 00:07:46.301215   13512 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0328 00:07:46.339722   13512 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0328 00:07:46.568199   13512 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0328 00:07:46.794560   13512 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0328 00:07:47.030753   13512 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0328 00:07:47.080010   13512 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0328 00:07:47.119907   13512 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0328 00:07:47.337692   13512 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0328 00:07:47.466608   13512 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0328 00:07:47.479640   13512 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0328 00:07:47.491643   13512 start.go:562] Will wait 60s for crictl version
	I0328 00:07:47.504248   13512 ssh_runner.go:195] Run: which crictl
	I0328 00:07:47.524970   13512 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0328 00:07:47.610796   13512 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.0
	RuntimeApiVersion:  v1
	I0328 00:07:47.620775   13512 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0328 00:07:47.663691   13512 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0328 00:07:47.703279   13512 out.go:204] * Preparing Kubernetes v1.29.3 on Docker 26.0.0 ...
	I0328 00:07:47.706887   13512 out.go:177]   - env NO_PROXY=172.28.239.31
	I0328 00:07:47.709891   13512 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0328 00:07:47.713908   13512 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0328 00:07:47.713908   13512 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0328 00:07:47.713908   13512 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0328 00:07:47.713908   13512 ip.go:207] Found interface: {Index:6 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:26:7a:39 Flags:up|broadcast|multicast|running}
	I0328 00:07:47.717912   13512 ip.go:210] interface addr: fe80::e3e0:8483:9c84:940f/64
	I0328 00:07:47.717912   13512 ip.go:210] interface addr: 172.28.224.1/20
	I0328 00:07:47.729907   13512 ssh_runner.go:195] Run: grep 172.28.224.1	host.minikube.internal$ /etc/hosts
	I0328 00:07:47.737125   13512 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.28.224.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0328 00:07:47.761024   13512 mustload.go:65] Loading cluster: ha-170000
	I0328 00:07:47.761139   13512 config.go:182] Loaded profile config "ha-170000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0328 00:07:47.762261   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-170000 ).state
	I0328 00:07:50.028825   13512 main.go:141] libmachine: [stdout =====>] : Running
	
	I0328 00:07:50.028825   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:07:50.029428   13512 host.go:66] Checking if "ha-170000" exists ...
	I0328 00:07:50.030370   13512 certs.go:68] Setting up C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-170000 for IP: 172.28.224.3
	I0328 00:07:50.030409   13512 certs.go:194] generating shared ca certs ...
	I0328 00:07:50.030409   13512 certs.go:226] acquiring lock for ca certs: {Name:mkc71405905d3cea24da832e98113e061e759324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 00:07:50.030998   13512 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key
	I0328 00:07:50.031532   13512 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key
	I0328 00:07:50.031859   13512 certs.go:256] generating profile certs ...
	I0328 00:07:50.032046   13512 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-170000\client.key
	I0328 00:07:50.032046   13512 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-170000\apiserver.key.37ab393e
	I0328 00:07:50.032873   13512 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-170000\apiserver.crt.37ab393e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.28.239.31 172.28.224.3 172.28.239.254]
	I0328 00:07:50.216254   13512 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-170000\apiserver.crt.37ab393e ...
	I0328 00:07:50.216254   13512 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-170000\apiserver.crt.37ab393e: {Name:mkbc210cc81156f002a806a051ff57fc39befd95 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 00:07:50.217662   13512 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-170000\apiserver.key.37ab393e ...
	I0328 00:07:50.217662   13512 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-170000\apiserver.key.37ab393e: {Name:mke26bef036ed69d4e4700d974f12ab136fbdff4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 00:07:50.219610   13512 certs.go:381] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-170000\apiserver.crt.37ab393e -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-170000\apiserver.crt
	I0328 00:07:50.232866   13512 certs.go:385] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-170000\apiserver.key.37ab393e -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-170000\apiserver.key
	I0328 00:07:50.233437   13512 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-170000\proxy-client.key
	I0328 00:07:50.234450   13512 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0328 00:07:50.234638   13512 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0328 00:07:50.234889   13512 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0328 00:07:50.234889   13512 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0328 00:07:50.235284   13512 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-170000\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0328 00:07:50.235463   13512 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-170000\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0328 00:07:50.235752   13512 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-170000\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0328 00:07:50.235752   13512 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-170000\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0328 00:07:50.236244   13512 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\10460.pem (1338 bytes)
	W0328 00:07:50.237112   13512 certs.go:480] ignoring C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\10460_empty.pem, impossibly tiny 0 bytes
	I0328 00:07:50.237278   13512 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0328 00:07:50.237824   13512 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0328 00:07:50.238382   13512 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0328 00:07:50.238413   13512 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0328 00:07:50.239331   13512 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\104602.pem (1708 bytes)
	I0328 00:07:50.239566   13512 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\104602.pem -> /usr/share/ca-certificates/104602.pem
	I0328 00:07:50.239566   13512 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0328 00:07:50.239566   13512 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\10460.pem -> /usr/share/ca-certificates/10460.pem
	I0328 00:07:50.240598   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-170000 ).state
	I0328 00:07:52.504337   13512 main.go:141] libmachine: [stdout =====>] : Running
	
	I0328 00:07:52.504337   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:07:52.504924   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-170000 ).networkadapters[0]).ipaddresses[0]
	I0328 00:07:55.257423   13512 main.go:141] libmachine: [stdout =====>] : 172.28.239.31
	
	I0328 00:07:55.257423   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:07:55.257423   13512 sshutil.go:53] new ssh client: &{IP:172.28.239.31 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-170000\id_rsa Username:docker}
	I0328 00:07:55.366960   13512 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0328 00:07:55.375363   13512 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0328 00:07:55.409034   13512 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0328 00:07:55.417029   13512 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0328 00:07:55.449871   13512 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0328 00:07:55.457980   13512 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0328 00:07:55.492079   13512 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0328 00:07:55.500387   13512 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0328 00:07:55.534896   13512 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0328 00:07:55.542786   13512 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0328 00:07:55.580128   13512 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0328 00:07:55.587932   13512 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0328 00:07:55.614540   13512 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0328 00:07:55.672567   13512 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0328 00:07:55.726887   13512 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0328 00:07:55.787545   13512 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0328 00:07:55.842747   13512 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-170000\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0328 00:07:55.896133   13512 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-170000\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0328 00:07:55.953949   13512 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-170000\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0328 00:07:56.005802   13512 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-170000\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0328 00:07:56.054356   13512 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\104602.pem --> /usr/share/ca-certificates/104602.pem (1708 bytes)
	I0328 00:07:56.108316   13512 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0328 00:07:56.159317   13512 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\10460.pem --> /usr/share/ca-certificates/10460.pem (1338 bytes)
	I0328 00:07:56.210148   13512 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0328 00:07:56.243540   13512 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0328 00:07:56.280008   13512 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0328 00:07:56.316776   13512 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0328 00:07:56.350633   13512 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0328 00:07:56.382606   13512 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0328 00:07:56.415104   13512 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (758 bytes)
	I0328 00:07:56.467559   13512 ssh_runner.go:195] Run: openssl version
	I0328 00:07:56.496461   13512 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0328 00:07:56.532306   13512 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0328 00:07:56.541192   13512 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 27 23:37 /usr/share/ca-certificates/minikubeCA.pem
	I0328 00:07:56.555348   13512 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0328 00:07:56.578094   13512 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0328 00:07:56.614533   13512 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10460.pem && ln -fs /usr/share/ca-certificates/10460.pem /etc/ssl/certs/10460.pem"
	I0328 00:07:56.647429   13512 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10460.pem
	I0328 00:07:56.654961   13512 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 27 23:40 /usr/share/ca-certificates/10460.pem
	I0328 00:07:56.670271   13512 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10460.pem
	I0328 00:07:56.692656   13512 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10460.pem /etc/ssl/certs/51391683.0"
	I0328 00:07:56.728495   13512 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/104602.pem && ln -fs /usr/share/ca-certificates/104602.pem /etc/ssl/certs/104602.pem"
	I0328 00:07:56.761630   13512 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/104602.pem
	I0328 00:07:56.770348   13512 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 27 23:40 /usr/share/ca-certificates/104602.pem
	I0328 00:07:56.784274   13512 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/104602.pem
	I0328 00:07:56.812857   13512 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/104602.pem /etc/ssl/certs/3ec20f2e.0"
	I0328 00:07:56.847038   13512 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0328 00:07:56.855653   13512 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0328 00:07:56.855860   13512 kubeadm.go:928] updating node {m02 172.28.224.3 8443 v1.29.3 docker true true} ...
	I0328 00:07:56.856042   13512 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-170000-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.28.224.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:ha-170000 Namespace:default APIServerHAVIP:172.28.239.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0328 00:07:56.856138   13512 kube-vip.go:111] generating kube-vip config ...
	I0328 00:07:56.869755   13512 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0328 00:07:56.897488   13512 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0328 00:07:56.897916   13512 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.28.239.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0328 00:07:56.910850   13512 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0328 00:07:56.929454   13512 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.29.3: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.29.3': No such file or directory
	
	Initiating transfer...
	I0328 00:07:56.943975   13512 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.29.3
	I0328 00:07:56.967618   13512 download.go:107] Downloading: https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubelet.sha256 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.29.3/kubelet
	I0328 00:07:56.968087   13512 download.go:107] Downloading: https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubeadm.sha256 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.29.3/kubeadm
	I0328 00:07:56.968087   13512 download.go:107] Downloading: https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubectl.sha256 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.29.3/kubectl
	I0328 00:07:58.111339   13512 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.29.3/kubectl -> /var/lib/minikube/binaries/v1.29.3/kubectl
	I0328 00:07:58.123061   13512 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.29.3/kubectl
	I0328 00:07:58.139077   13512 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.29.3/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.29.3/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.29.3/kubectl': No such file or directory
	I0328 00:07:58.139356   13512 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.29.3/kubectl --> /var/lib/minikube/binaries/v1.29.3/kubectl (49799168 bytes)
	I0328 00:07:58.186374   13512 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.29.3/kubeadm -> /var/lib/minikube/binaries/v1.29.3/kubeadm
	I0328 00:07:58.198369   13512 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.29.3/kubeadm
	I0328 00:07:58.266432   13512 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.29.3/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.29.3/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.29.3/kubeadm': No such file or directory
	I0328 00:07:58.266876   13512 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.29.3/kubeadm --> /var/lib/minikube/binaries/v1.29.3/kubeadm (48340992 bytes)
	I0328 00:07:59.003371   13512 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0328 00:07:59.032007   13512 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.29.3/kubelet -> /var/lib/minikube/binaries/v1.29.3/kubelet
	I0328 00:07:59.049904   13512 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.29.3/kubelet
	I0328 00:07:59.058087   13512 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.29.3/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.29.3/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.29.3/kubelet': No such file or directory
	I0328 00:07:59.058392   13512 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.29.3/kubelet --> /var/lib/minikube/binaries/v1.29.3/kubelet (111919104 bytes)
	I0328 00:07:59.721862   13512 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0328 00:07:59.743466   13512 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0328 00:07:59.778259   13512 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0328 00:07:59.813445   13512 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1346 bytes)
	I0328 00:07:59.865215   13512 ssh_runner.go:195] Run: grep 172.28.239.254	control-plane.minikube.internal$ /etc/hosts
	I0328 00:07:59.873961   13512 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.28.239.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0328 00:07:59.913405   13512 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0328 00:08:00.143254   13512 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0328 00:08:00.174620   13512 host.go:66] Checking if "ha-170000" exists ...
	I0328 00:08:00.174906   13512 start.go:316] joinCluster: &{Name:ha-170000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:ha-170000 Namespace:default APIServerHAVIP:172.28.239.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.28.239.31 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.28.224.3 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0328 00:08:00.175545   13512 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0328 00:08:00.175545   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-170000 ).state
	I0328 00:08:02.389273   13512 main.go:141] libmachine: [stdout =====>] : Running
	
	I0328 00:08:02.389273   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:08:02.389733   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-170000 ).networkadapters[0]).ipaddresses[0]
	I0328 00:08:05.110580   13512 main.go:141] libmachine: [stdout =====>] : 172.28.239.31
	
	I0328 00:08:05.110848   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:08:05.111401   13512 sshutil.go:53] new ssh client: &{IP:172.28.239.31 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-170000\id_rsa Username:docker}
	I0328 00:08:05.350405   13512 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm token create --print-join-command --ttl=0": (5.1748285s)
	I0328 00:08:05.350496   13512 start.go:342] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:172.28.224.3 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0328 00:08:05.350582   13512 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 2u3x2e.vtauwqwzkqqj4wk1 --discovery-token-ca-cert-hash sha256:0ddd52f56960165cc9115f65779af061c85c9b2eafbef06ee2128eb63ce54d7a --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-170000-m02 --control-plane --apiserver-advertise-address=172.28.224.3 --apiserver-bind-port=8443"
	I0328 00:08:55.545770   13512 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 2u3x2e.vtauwqwzkqqj4wk1 --discovery-token-ca-cert-hash sha256:0ddd52f56960165cc9115f65779af061c85c9b2eafbef06ee2128eb63ce54d7a --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-170000-m02 --control-plane --apiserver-advertise-address=172.28.224.3 --apiserver-bind-port=8443": (50.1948777s)
	I0328 00:08:55.545982   13512 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0328 00:08:56.572142   13512 ssh_runner.go:235] Completed: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet": (1.026106s)
	I0328 00:08:56.585707   13512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-170000-m02 minikube.k8s.io/updated_at=2024_03_28T00_08_56_0700 minikube.k8s.io/version=v1.33.0-beta.0 minikube.k8s.io/commit=1a940f980f77c9bfc8ef93678db8c1f49c9dd79d minikube.k8s.io/name=ha-170000 minikube.k8s.io/primary=false
	I0328 00:08:56.773565   13512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-170000-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0328 00:08:56.948520   13512 start.go:318] duration metric: took 56.7732633s to joinCluster
	I0328 00:08:56.948778   13512 start.go:234] Will wait 6m0s for node &{Name:m02 IP:172.28.224.3 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0328 00:08:56.951924   13512 out.go:177] * Verifying Kubernetes components...
	I0328 00:08:56.949474   13512 config.go:182] Loaded profile config "ha-170000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0328 00:08:56.969533   13512 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0328 00:08:57.441033   13512 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0328 00:08:57.492077   13512 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0328 00:08:57.493117   13512 kapi.go:59] client config for ha-170000: &rest.Config{Host:"https://172.28.239.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-170000\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-170000\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x26ab500), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0328 00:08:57.493197   13512 kubeadm.go:477] Overriding stale ClientConfig host https://172.28.239.254:8443 with https://172.28.239.31:8443
	I0328 00:08:57.493747   13512 node_ready.go:35] waiting up to 6m0s for node "ha-170000-m02" to be "Ready" ...
	I0328 00:08:57.494335   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m02
	I0328 00:08:57.494404   13512 round_trippers.go:469] Request Headers:
	I0328 00:08:57.494404   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:08:57.494404   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:08:57.510313   13512 round_trippers.go:574] Response Status: 200 OK in 15 milliseconds
	I0328 00:08:58.009266   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m02
	I0328 00:08:58.009266   13512 round_trippers.go:469] Request Headers:
	I0328 00:08:58.009266   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:08:58.009266   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:08:58.014467   13512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 00:08:58.500133   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m02
	I0328 00:08:58.500221   13512 round_trippers.go:469] Request Headers:
	I0328 00:08:58.500221   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:08:58.500221   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:08:58.506830   13512 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0328 00:08:59.005105   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m02
	I0328 00:08:59.005464   13512 round_trippers.go:469] Request Headers:
	I0328 00:08:59.005544   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:08:59.005568   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:08:59.015001   13512 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0328 00:08:59.494552   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m02
	I0328 00:08:59.494552   13512 round_trippers.go:469] Request Headers:
	I0328 00:08:59.494552   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:08:59.494552   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:08:59.499552   13512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 00:08:59.500343   13512 node_ready.go:53] node "ha-170000-m02" has status "Ready":"False"
	I0328 00:08:59.997110   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m02
	I0328 00:08:59.997110   13512 round_trippers.go:469] Request Headers:
	I0328 00:08:59.997110   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:08:59.997110   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:09:00.002973   13512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 00:09:00.501969   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m02
	I0328 00:09:00.501969   13512 round_trippers.go:469] Request Headers:
	I0328 00:09:00.501969   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:09:00.501969   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:09:00.506627   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:09:00.994982   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m02
	I0328 00:09:00.995033   13512 round_trippers.go:469] Request Headers:
	I0328 00:09:00.995033   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:09:00.995033   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:09:01.000668   13512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 00:09:01.501526   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m02
	I0328 00:09:01.501526   13512 round_trippers.go:469] Request Headers:
	I0328 00:09:01.501526   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:09:01.501526   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:09:01.509135   13512 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0328 00:09:01.510821   13512 node_ready.go:53] node "ha-170000-m02" has status "Ready":"False"
	I0328 00:09:02.007836   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m02
	I0328 00:09:02.007912   13512 round_trippers.go:469] Request Headers:
	I0328 00:09:02.007912   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:09:02.007912   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:09:02.013664   13512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 00:09:02.497593   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m02
	I0328 00:09:02.497691   13512 round_trippers.go:469] Request Headers:
	I0328 00:09:02.497691   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:09:02.497691   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:09:02.504768   13512 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0328 00:09:03.006068   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m02
	I0328 00:09:03.006139   13512 round_trippers.go:469] Request Headers:
	I0328 00:09:03.006139   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:09:03.006139   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:09:03.295318   13512 round_trippers.go:574] Response Status: 200 OK in 288 milliseconds
	I0328 00:09:03.506405   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m02
	I0328 00:09:03.506463   13512 round_trippers.go:469] Request Headers:
	I0328 00:09:03.506463   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:09:03.506463   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:09:03.511672   13512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 00:09:03.512532   13512 node_ready.go:53] node "ha-170000-m02" has status "Ready":"False"
	I0328 00:09:03.994483   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m02
	I0328 00:09:03.994616   13512 round_trippers.go:469] Request Headers:
	I0328 00:09:03.994681   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:09:03.994681   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:09:04.000163   13512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 00:09:04.497764   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m02
	I0328 00:09:04.497825   13512 round_trippers.go:469] Request Headers:
	I0328 00:09:04.497825   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:09:04.497825   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:09:04.504490   13512 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0328 00:09:04.999468   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m02
	I0328 00:09:04.999568   13512 round_trippers.go:469] Request Headers:
	I0328 00:09:04.999568   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:09:04.999568   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:09:05.005516   13512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 00:09:05.504541   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m02
	I0328 00:09:05.504541   13512 round_trippers.go:469] Request Headers:
	I0328 00:09:05.504541   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:09:05.504541   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:09:05.510072   13512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 00:09:06.008969   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m02
	I0328 00:09:06.008969   13512 round_trippers.go:469] Request Headers:
	I0328 00:09:06.008969   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:09:06.008969   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:09:06.016375   13512 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0328 00:09:06.017292   13512 node_ready.go:53] node "ha-170000-m02" has status "Ready":"False"
	I0328 00:09:06.502661   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m02
	I0328 00:09:06.502661   13512 round_trippers.go:469] Request Headers:
	I0328 00:09:06.502661   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:09:06.502661   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:09:06.508555   13512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 00:09:07.006915   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m02
	I0328 00:09:07.006975   13512 round_trippers.go:469] Request Headers:
	I0328 00:09:07.006975   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:09:07.006975   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:09:07.013757   13512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 00:09:07.508439   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m02
	I0328 00:09:07.508439   13512 round_trippers.go:469] Request Headers:
	I0328 00:09:07.508439   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:09:07.508439   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:09:07.513891   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:09:07.995347   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m02
	I0328 00:09:07.995347   13512 round_trippers.go:469] Request Headers:
	I0328 00:09:07.995347   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:09:07.995347   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:09:08.003942   13512 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0328 00:09:08.005281   13512 node_ready.go:49] node "ha-170000-m02" has status "Ready":"True"
	I0328 00:09:08.005281   13512 node_ready.go:38] duration metric: took 10.511469s for node "ha-170000-m02" to be "Ready" ...
	I0328 00:09:08.005357   13512 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0328 00:09:08.005524   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods
	I0328 00:09:08.005524   13512 round_trippers.go:469] Request Headers:
	I0328 00:09:08.005524   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:09:08.005524   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:09:08.013834   13512 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0328 00:09:08.022887   13512 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-5npq4" in "kube-system" namespace to be "Ready" ...
	I0328 00:09:08.022887   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-5npq4
	I0328 00:09:08.022887   13512 round_trippers.go:469] Request Headers:
	I0328 00:09:08.022887   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:09:08.022887   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:09:08.027691   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:09:08.029489   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000
	I0328 00:09:08.029489   13512 round_trippers.go:469] Request Headers:
	I0328 00:09:08.029608   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:09:08.029608   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:09:08.033702   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:09:08.034629   13512 pod_ready.go:92] pod "coredns-76f75df574-5npq4" in "kube-system" namespace has status "Ready":"True"
	I0328 00:09:08.034629   13512 pod_ready.go:81] duration metric: took 11.7424ms for pod "coredns-76f75df574-5npq4" in "kube-system" namespace to be "Ready" ...
	I0328 00:09:08.034629   13512 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-mgrhj" in "kube-system" namespace to be "Ready" ...
	I0328 00:09:08.034629   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-mgrhj
	I0328 00:09:08.034629   13512 round_trippers.go:469] Request Headers:
	I0328 00:09:08.034629   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:09:08.034629   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:09:08.038982   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:09:08.040076   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000
	I0328 00:09:08.040076   13512 round_trippers.go:469] Request Headers:
	I0328 00:09:08.040076   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:09:08.040076   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:09:08.045395   13512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 00:09:08.047072   13512 pod_ready.go:92] pod "coredns-76f75df574-mgrhj" in "kube-system" namespace has status "Ready":"True"
	I0328 00:09:08.047072   13512 pod_ready.go:81] duration metric: took 12.4424ms for pod "coredns-76f75df574-mgrhj" in "kube-system" namespace to be "Ready" ...
	I0328 00:09:08.047155   13512 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-170000" in "kube-system" namespace to be "Ready" ...
	I0328 00:09:08.047217   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/etcd-ha-170000
	I0328 00:09:08.047217   13512 round_trippers.go:469] Request Headers:
	I0328 00:09:08.047217   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:09:08.047217   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:09:08.062731   13512 round_trippers.go:574] Response Status: 200 OK in 15 milliseconds
	I0328 00:09:08.064273   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000
	I0328 00:09:08.064381   13512 round_trippers.go:469] Request Headers:
	I0328 00:09:08.064381   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:09:08.064381   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:09:08.067761   13512 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0328 00:09:08.069552   13512 pod_ready.go:92] pod "etcd-ha-170000" in "kube-system" namespace has status "Ready":"True"
	I0328 00:09:08.069552   13512 pod_ready.go:81] duration metric: took 22.3969ms for pod "etcd-ha-170000" in "kube-system" namespace to be "Ready" ...
	I0328 00:09:08.069552   13512 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-170000-m02" in "kube-system" namespace to be "Ready" ...
	I0328 00:09:08.069786   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/etcd-ha-170000-m02
	I0328 00:09:08.069786   13512 round_trippers.go:469] Request Headers:
	I0328 00:09:08.069786   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:09:08.069786   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:09:08.074176   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:09:08.075213   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m02
	I0328 00:09:08.075213   13512 round_trippers.go:469] Request Headers:
	I0328 00:09:08.075213   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:09:08.075213   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:09:08.080025   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:09:08.576937   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/etcd-ha-170000-m02
	I0328 00:09:08.576937   13512 round_trippers.go:469] Request Headers:
	I0328 00:09:08.577039   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:09:08.577039   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:09:08.583362   13512 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0328 00:09:08.584315   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m02
	I0328 00:09:08.584417   13512 round_trippers.go:469] Request Headers:
	I0328 00:09:08.584417   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:09:08.584417   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:09:08.588793   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:09:09.085472   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/etcd-ha-170000-m02
	I0328 00:09:09.085472   13512 round_trippers.go:469] Request Headers:
	I0328 00:09:09.085472   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:09:09.085472   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:09:09.090004   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:09:09.092184   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m02
	I0328 00:09:09.092184   13512 round_trippers.go:469] Request Headers:
	I0328 00:09:09.092184   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:09:09.092331   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:09:09.108495   13512 round_trippers.go:574] Response Status: 200 OK in 16 milliseconds
	I0328 00:09:09.581701   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/etcd-ha-170000-m02
	I0328 00:09:09.581918   13512 round_trippers.go:469] Request Headers:
	I0328 00:09:09.581918   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:09:09.581918   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:09:09.588225   13512 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0328 00:09:09.589829   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m02
	I0328 00:09:09.589862   13512 round_trippers.go:469] Request Headers:
	I0328 00:09:09.589862   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:09:09.589862   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:09:09.594801   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:09:10.072783   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/etcd-ha-170000-m02
	I0328 00:09:10.072919   13512 round_trippers.go:469] Request Headers:
	I0328 00:09:10.072919   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:09:10.072919   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:09:10.078373   13512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 00:09:10.080683   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m02
	I0328 00:09:10.080756   13512 round_trippers.go:469] Request Headers:
	I0328 00:09:10.080756   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:09:10.080756   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:09:10.085417   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:09:10.086349   13512 pod_ready.go:102] pod "etcd-ha-170000-m02" in "kube-system" namespace has status "Ready":"False"
	I0328 00:09:10.580809   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/etcd-ha-170000-m02
	I0328 00:09:10.580928   13512 round_trippers.go:469] Request Headers:
	I0328 00:09:10.580928   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:09:10.580928   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:09:10.595269   13512 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0328 00:09:10.596905   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m02
	I0328 00:09:10.596972   13512 round_trippers.go:469] Request Headers:
	I0328 00:09:10.596972   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:09:10.596972   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:09:10.600950   13512 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0328 00:09:11.069987   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/etcd-ha-170000-m02
	I0328 00:09:11.069987   13512 round_trippers.go:469] Request Headers:
	I0328 00:09:11.069987   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:09:11.069987   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:09:11.076105   13512 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0328 00:09:11.076996   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m02
	I0328 00:09:11.076996   13512 round_trippers.go:469] Request Headers:
	I0328 00:09:11.076996   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:09:11.076996   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:09:11.081993   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:09:11.083508   13512 pod_ready.go:92] pod "etcd-ha-170000-m02" in "kube-system" namespace has status "Ready":"True"
	I0328 00:09:11.083571   13512 pod_ready.go:81] duration metric: took 3.0139995s for pod "etcd-ha-170000-m02" in "kube-system" namespace to be "Ready" ...
	I0328 00:09:11.083571   13512 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-170000" in "kube-system" namespace to be "Ready" ...
	I0328 00:09:11.083626   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000
	I0328 00:09:11.083626   13512 round_trippers.go:469] Request Headers:
	I0328 00:09:11.083626   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:09:11.083626   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:09:11.088192   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:09:11.089192   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000
	I0328 00:09:11.089192   13512 round_trippers.go:469] Request Headers:
	I0328 00:09:11.089192   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:09:11.089192   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:09:11.095227   13512 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0328 00:09:11.096247   13512 pod_ready.go:92] pod "kube-apiserver-ha-170000" in "kube-system" namespace has status "Ready":"True"
	I0328 00:09:11.096247   13512 pod_ready.go:81] duration metric: took 12.6762ms for pod "kube-apiserver-ha-170000" in "kube-system" namespace to be "Ready" ...
	I0328 00:09:11.096247   13512 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-170000-m02" in "kube-system" namespace to be "Ready" ...
	I0328 00:09:11.096247   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m02
	I0328 00:09:11.096247   13512 round_trippers.go:469] Request Headers:
	I0328 00:09:11.096247   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:09:11.096247   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:09:11.101280   13512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 00:09:11.210636   13512 request.go:629] Waited for 107.4398ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.239.31:8443/api/v1/nodes/ha-170000-m02
	I0328 00:09:11.210738   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m02
	I0328 00:09:11.210738   13512 round_trippers.go:469] Request Headers:
	I0328 00:09:11.210738   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:09:11.210738   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:09:11.220758   13512 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0328 00:09:11.221499   13512 pod_ready.go:92] pod "kube-apiserver-ha-170000-m02" in "kube-system" namespace has status "Ready":"True"
	I0328 00:09:11.221499   13512 pod_ready.go:81] duration metric: took 125.2509ms for pod "kube-apiserver-ha-170000-m02" in "kube-system" namespace to be "Ready" ...
	I0328 00:09:11.221499   13512 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-170000" in "kube-system" namespace to be "Ready" ...
	I0328 00:09:11.397809   13512 request.go:629] Waited for 176.185ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-170000
	I0328 00:09:11.397950   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-170000
	I0328 00:09:11.397994   13512 round_trippers.go:469] Request Headers:
	I0328 00:09:11.397994   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:09:11.397994   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:09:11.403406   13512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 00:09:11.600959   13512 request.go:629] Waited for 196.1563ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.239.31:8443/api/v1/nodes/ha-170000
	I0328 00:09:11.601208   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000
	I0328 00:09:11.601208   13512 round_trippers.go:469] Request Headers:
	I0328 00:09:11.601208   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:09:11.601208   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:09:11.610580   13512 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0328 00:09:11.610580   13512 pod_ready.go:92] pod "kube-controller-manager-ha-170000" in "kube-system" namespace has status "Ready":"True"
	I0328 00:09:11.610580   13512 pod_ready.go:81] duration metric: took 389.0793ms for pod "kube-controller-manager-ha-170000" in "kube-system" namespace to be "Ready" ...
	I0328 00:09:11.610580   13512 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-170000-m02" in "kube-system" namespace to be "Ready" ...
	I0328 00:09:11.803766   13512 request.go:629] Waited for 192.156ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-170000-m02
	I0328 00:09:11.803880   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-170000-m02
	I0328 00:09:11.803880   13512 round_trippers.go:469] Request Headers:
	I0328 00:09:11.803880   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:09:11.804056   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:09:11.809478   13512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 00:09:12.005669   13512 request.go:629] Waited for 195.0792ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.239.31:8443/api/v1/nodes/ha-170000-m02
	I0328 00:09:12.005910   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m02
	I0328 00:09:12.005910   13512 round_trippers.go:469] Request Headers:
	I0328 00:09:12.005910   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:09:12.005910   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:09:12.010993   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:09:12.012190   13512 pod_ready.go:92] pod "kube-controller-manager-ha-170000-m02" in "kube-system" namespace has status "Ready":"True"
	I0328 00:09:12.012237   13512 pod_ready.go:81] duration metric: took 401.6539ms for pod "kube-controller-manager-ha-170000-m02" in "kube-system" namespace to be "Ready" ...
	I0328 00:09:12.012237   13512 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-w2z74" in "kube-system" namespace to be "Ready" ...
	I0328 00:09:12.207324   13512 request.go:629] Waited for 195.0862ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-proxy-w2z74
	I0328 00:09:12.207324   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-proxy-w2z74
	I0328 00:09:12.207324   13512 round_trippers.go:469] Request Headers:
	I0328 00:09:12.207324   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:09:12.207324   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:09:12.212918   13512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 00:09:12.409814   13512 request.go:629] Waited for 195.4469ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.239.31:8443/api/v1/nodes/ha-170000
	I0328 00:09:12.410372   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000
	I0328 00:09:12.410372   13512 round_trippers.go:469] Request Headers:
	I0328 00:09:12.410372   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:09:12.410372   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:09:12.415722   13512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 00:09:12.417482   13512 pod_ready.go:92] pod "kube-proxy-w2z74" in "kube-system" namespace has status "Ready":"True"
	I0328 00:09:12.417608   13512 pod_ready.go:81] duration metric: took 405.3683ms for pod "kube-proxy-w2z74" in "kube-system" namespace to be "Ready" ...
	I0328 00:09:12.417608   13512 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-wrvmg" in "kube-system" namespace to be "Ready" ...
	I0328 00:09:12.598869   13512 request.go:629] Waited for 181.0726ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-proxy-wrvmg
	I0328 00:09:12.599176   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-proxy-wrvmg
	I0328 00:09:12.599176   13512 round_trippers.go:469] Request Headers:
	I0328 00:09:12.599176   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:09:12.599176   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:09:12.604624   13512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 00:09:12.801254   13512 request.go:629] Waited for 194.9639ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.239.31:8443/api/v1/nodes/ha-170000-m02
	I0328 00:09:12.801513   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m02
	I0328 00:09:12.801513   13512 round_trippers.go:469] Request Headers:
	I0328 00:09:12.801513   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:09:12.801513   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:09:12.806837   13512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 00:09:12.808030   13512 pod_ready.go:92] pod "kube-proxy-wrvmg" in "kube-system" namespace has status "Ready":"True"
	I0328 00:09:12.808030   13512 pod_ready.go:81] duration metric: took 390.4204ms for pod "kube-proxy-wrvmg" in "kube-system" namespace to be "Ready" ...
	I0328 00:09:12.808607   13512 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-170000" in "kube-system" namespace to be "Ready" ...
	I0328 00:09:13.006922   13512 request.go:629] Waited for 198.313ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-170000
	I0328 00:09:13.007252   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-170000
	I0328 00:09:13.007252   13512 round_trippers.go:469] Request Headers:
	I0328 00:09:13.007252   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:09:13.007252   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:09:13.013306   13512 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0328 00:09:13.195631   13512 request.go:629] Waited for 180.9582ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.239.31:8443/api/v1/nodes/ha-170000
	I0328 00:09:13.195740   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000
	I0328 00:09:13.195740   13512 round_trippers.go:469] Request Headers:
	I0328 00:09:13.195909   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:09:13.195909   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:09:13.201619   13512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 00:09:13.202988   13512 pod_ready.go:92] pod "kube-scheduler-ha-170000" in "kube-system" namespace has status "Ready":"True"
	I0328 00:09:13.202988   13512 pod_ready.go:81] duration metric: took 394.3786ms for pod "kube-scheduler-ha-170000" in "kube-system" namespace to be "Ready" ...
	I0328 00:09:13.203061   13512 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-170000-m02" in "kube-system" namespace to be "Ready" ...
	I0328 00:09:13.398942   13512 request.go:629] Waited for 195.8226ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-170000-m02
	I0328 00:09:13.399172   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-170000-m02
	I0328 00:09:13.399172   13512 round_trippers.go:469] Request Headers:
	I0328 00:09:13.399294   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:09:13.399294   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:09:13.408451   13512 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0328 00:09:13.603041   13512 request.go:629] Waited for 192.7566ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.239.31:8443/api/v1/nodes/ha-170000-m02
	I0328 00:09:13.603211   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m02
	I0328 00:09:13.603211   13512 round_trippers.go:469] Request Headers:
	I0328 00:09:13.603211   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:09:13.603211   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:09:13.613103   13512 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0328 00:09:13.614241   13512 pod_ready.go:92] pod "kube-scheduler-ha-170000-m02" in "kube-system" namespace has status "Ready":"True"
	I0328 00:09:13.614241   13512 pod_ready.go:81] duration metric: took 411.1768ms for pod "kube-scheduler-ha-170000-m02" in "kube-system" namespace to be "Ready" ...
	I0328 00:09:13.614315   13512 pod_ready.go:38] duration metric: took 5.6089231s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0328 00:09:13.614373   13512 api_server.go:52] waiting for apiserver process to appear ...
	I0328 00:09:13.627313   13512 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 00:09:13.660497   13512 api_server.go:72] duration metric: took 16.7115223s to wait for apiserver process to appear ...
	I0328 00:09:13.660497   13512 api_server.go:88] waiting for apiserver healthz status ...
	I0328 00:09:13.660497   13512 api_server.go:253] Checking apiserver healthz at https://172.28.239.31:8443/healthz ...
	I0328 00:09:13.672402   13512 api_server.go:279] https://172.28.239.31:8443/healthz returned 200:
	ok
	I0328 00:09:13.672402   13512 round_trippers.go:463] GET https://172.28.239.31:8443/version
	I0328 00:09:13.672402   13512 round_trippers.go:469] Request Headers:
	I0328 00:09:13.672402   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:09:13.672402   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:09:13.672951   13512 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0328 00:09:13.672951   13512 api_server.go:141] control plane version: v1.29.3
	I0328 00:09:13.672951   13512 api_server.go:131] duration metric: took 12.4531ms to wait for apiserver health ...
	I0328 00:09:13.672951   13512 system_pods.go:43] waiting for kube-system pods to appear ...
	I0328 00:09:13.805680   13512 request.go:629] Waited for 132.443ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods
	I0328 00:09:13.805680   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods
	I0328 00:09:13.805680   13512 round_trippers.go:469] Request Headers:
	I0328 00:09:13.805813   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:09:13.805813   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:09:13.815249   13512 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0328 00:09:13.824109   13512 system_pods.go:59] 17 kube-system pods found
	I0328 00:09:13.824188   13512 system_pods.go:61] "coredns-76f75df574-5npq4" [b4a0463f-825d-4255-8704-6f41119d0930] Running
	I0328 00:09:13.824188   13512 system_pods.go:61] "coredns-76f75df574-mgrhj" [99d60631-1b51-4a6c-8819-5211bda5280d] Running
	I0328 00:09:13.824188   13512 system_pods.go:61] "etcd-ha-170000" [845298f4-b42f-4a38-888d-eda92aba2483] Running
	I0328 00:09:13.824188   13512 system_pods.go:61] "etcd-ha-170000-m02" [e37bcbf6-ea52-4df9-85e5-075621af992e] Running
	I0328 00:09:13.824188   13512 system_pods.go:61] "kindnet-n4x2r" [3b4b74d3-f82e-4337-a430-63ff92ca0efd] Running
	I0328 00:09:13.824188   13512 system_pods.go:61] "kindnet-xf7sr" [32758e2b-9a9f-4f89-9e6e-e1594abc2019] Running
	I0328 00:09:13.824188   13512 system_pods.go:61] "kube-apiserver-ha-170000" [0a3b4585-9f02-46b3-84cf-b4920d4dd1e3] Running
	I0328 00:09:13.824188   13512 system_pods.go:61] "kube-apiserver-ha-170000-m02" [3c02a8b5-5251-48fb-9865-bbdd879129bd] Running
	I0328 00:09:13.824188   13512 system_pods.go:61] "kube-controller-manager-ha-170000" [0062a6c2-2560-410f-b286-06409e50d26f] Running
	I0328 00:09:13.824264   13512 system_pods.go:61] "kube-controller-manager-ha-170000-m02" [4b136d09-f721-4103-b51b-ad58673ef4e2] Running
	I0328 00:09:13.824264   13512 system_pods.go:61] "kube-proxy-w2z74" [e88fc457-735e-4a67-89a1-223af2ea10d9] Running
	I0328 00:09:13.824307   13512 system_pods.go:61] "kube-proxy-wrvmg" [a049745a-2586-4e19-b8a9-ca96fead5905] Running
	I0328 00:09:13.824307   13512 system_pods.go:61] "kube-scheduler-ha-170000" [e11fffcf-8ff5-421d-9151-e00cd9a639a1] Running
	I0328 00:09:13.824307   13512 system_pods.go:61] "kube-scheduler-ha-170000-m02" [4bb54c59-156a-42a0-bca0-fb43cd4cbe27] Running
	I0328 00:09:13.824307   13512 system_pods.go:61] "kube-vip-ha-170000" [f958566a-56f8-436a-b5b4-8823c6cb2e2c] Running
	I0328 00:09:13.824307   13512 system_pods.go:61] "kube-vip-ha-170000-m02" [0380ec5c-628c-429c-8f5f-36260dc029f4] Running
	I0328 00:09:13.824307   13512 system_pods.go:61] "storage-provisioner" [5586fd50-77c3-4335-8c64-1120c6a32034] Running
	I0328 00:09:13.824307   13512 system_pods.go:74] duration metric: took 151.3551ms to wait for pod list to return data ...
	I0328 00:09:13.824307   13512 default_sa.go:34] waiting for default service account to be created ...
	I0328 00:09:14.011087   13512 request.go:629] Waited for 186.5671ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.239.31:8443/api/v1/namespaces/default/serviceaccounts
	I0328 00:09:14.011362   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/default/serviceaccounts
	I0328 00:09:14.011362   13512 round_trippers.go:469] Request Headers:
	I0328 00:09:14.011362   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:09:14.011428   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:09:14.016313   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:09:14.016440   13512 default_sa.go:45] found service account: "default"
	I0328 00:09:14.016440   13512 default_sa.go:55] duration metric: took 192.132ms for default service account to be created ...
	I0328 00:09:14.016440   13512 system_pods.go:116] waiting for k8s-apps to be running ...
	I0328 00:09:14.199130   13512 request.go:629] Waited for 182.5328ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods
	I0328 00:09:14.199308   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods
	I0328 00:09:14.199423   13512 round_trippers.go:469] Request Headers:
	I0328 00:09:14.199423   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:09:14.199423   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:09:14.209138   13512 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0328 00:09:14.217659   13512 system_pods.go:86] 17 kube-system pods found
	I0328 00:09:14.217711   13512 system_pods.go:89] "coredns-76f75df574-5npq4" [b4a0463f-825d-4255-8704-6f41119d0930] Running
	I0328 00:09:14.217711   13512 system_pods.go:89] "coredns-76f75df574-mgrhj" [99d60631-1b51-4a6c-8819-5211bda5280d] Running
	I0328 00:09:14.217711   13512 system_pods.go:89] "etcd-ha-170000" [845298f4-b42f-4a38-888d-eda92aba2483] Running
	I0328 00:09:14.217711   13512 system_pods.go:89] "etcd-ha-170000-m02" [e37bcbf6-ea52-4df9-85e5-075621af992e] Running
	I0328 00:09:14.217711   13512 system_pods.go:89] "kindnet-n4x2r" [3b4b74d3-f82e-4337-a430-63ff92ca0efd] Running
	I0328 00:09:14.217711   13512 system_pods.go:89] "kindnet-xf7sr" [32758e2b-9a9f-4f89-9e6e-e1594abc2019] Running
	I0328 00:09:14.217711   13512 system_pods.go:89] "kube-apiserver-ha-170000" [0a3b4585-9f02-46b3-84cf-b4920d4dd1e3] Running
	I0328 00:09:14.217711   13512 system_pods.go:89] "kube-apiserver-ha-170000-m02" [3c02a8b5-5251-48fb-9865-bbdd879129bd] Running
	I0328 00:09:14.217711   13512 system_pods.go:89] "kube-controller-manager-ha-170000" [0062a6c2-2560-410f-b286-06409e50d26f] Running
	I0328 00:09:14.217711   13512 system_pods.go:89] "kube-controller-manager-ha-170000-m02" [4b136d09-f721-4103-b51b-ad58673ef4e2] Running
	I0328 00:09:14.217711   13512 system_pods.go:89] "kube-proxy-w2z74" [e88fc457-735e-4a67-89a1-223af2ea10d9] Running
	I0328 00:09:14.217711   13512 system_pods.go:89] "kube-proxy-wrvmg" [a049745a-2586-4e19-b8a9-ca96fead5905] Running
	I0328 00:09:14.217711   13512 system_pods.go:89] "kube-scheduler-ha-170000" [e11fffcf-8ff5-421d-9151-e00cd9a639a1] Running
	I0328 00:09:14.217711   13512 system_pods.go:89] "kube-scheduler-ha-170000-m02" [4bb54c59-156a-42a0-bca0-fb43cd4cbe27] Running
	I0328 00:09:14.217711   13512 system_pods.go:89] "kube-vip-ha-170000" [f958566a-56f8-436a-b5b4-8823c6cb2e2c] Running
	I0328 00:09:14.217711   13512 system_pods.go:89] "kube-vip-ha-170000-m02" [0380ec5c-628c-429c-8f5f-36260dc029f4] Running
	I0328 00:09:14.217711   13512 system_pods.go:89] "storage-provisioner" [5586fd50-77c3-4335-8c64-1120c6a32034] Running
	I0328 00:09:14.217711   13512 system_pods.go:126] duration metric: took 201.2702ms to wait for k8s-apps to be running ...
	I0328 00:09:14.217711   13512 system_svc.go:44] waiting for kubelet service to be running ....
	I0328 00:09:14.232445   13512 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0328 00:09:14.263498   13512 system_svc.go:56] duration metric: took 45.7864ms WaitForService to wait for kubelet
	I0328 00:09:14.263498   13512 kubeadm.go:576] duration metric: took 17.3145191s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0328 00:09:14.263498   13512 node_conditions.go:102] verifying NodePressure condition ...
	I0328 00:09:14.405125   13512 request.go:629] Waited for 141.3461ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.239.31:8443/api/v1/nodes
	I0328 00:09:14.405340   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes
	I0328 00:09:14.405340   13512 round_trippers.go:469] Request Headers:
	I0328 00:09:14.405340   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:09:14.405404   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:09:14.415830   13512 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0328 00:09:14.416431   13512 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0328 00:09:14.416431   13512 node_conditions.go:123] node cpu capacity is 2
	I0328 00:09:14.416431   13512 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0328 00:09:14.416431   13512 node_conditions.go:123] node cpu capacity is 2
	I0328 00:09:14.416431   13512 node_conditions.go:105] duration metric: took 152.932ms to run NodePressure ...
	I0328 00:09:14.416431   13512 start.go:240] waiting for startup goroutines ...
	I0328 00:09:14.416431   13512 start.go:254] writing updated cluster config ...
	I0328 00:09:14.421256   13512 out.go:177] 
	I0328 00:09:14.434298   13512 config.go:182] Loaded profile config "ha-170000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0328 00:09:14.434298   13512 profile.go:142] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-170000\config.json ...
	I0328 00:09:14.451802   13512 out.go:177] * Starting "ha-170000-m03" control-plane node in "ha-170000" cluster
	I0328 00:09:14.453792   13512 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0328 00:09:14.453792   13512 cache.go:56] Caching tarball of preloaded images
	I0328 00:09:14.454785   13512 preload.go:173] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0328 00:09:14.454785   13512 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0328 00:09:14.456893   13512 profile.go:142] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-170000\config.json ...
	I0328 00:09:14.458804   13512 start.go:360] acquireMachinesLock for ha-170000-m03: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0328 00:09:14.458804   13512 start.go:364] duration metric: took 0s to acquireMachinesLock for "ha-170000-m03"
	I0328 00:09:14.458804   13512 start.go:93] Provisioning new machine with config: &{Name:ha-170000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:ha-170000 Namespace:default APIServerHAVIP:172.28.239.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.28.239.31 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.28.224.3 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0328 00:09:14.459791   13512 start.go:125] createHost starting for "m03" (driver="hyperv")
	I0328 00:09:14.462807   13512 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0328 00:09:14.462807   13512 start.go:159] libmachine.API.Create for "ha-170000" (driver="hyperv")
	I0328 00:09:14.462807   13512 client.go:168] LocalClient.Create starting
	I0328 00:09:14.463799   13512 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem
	I0328 00:09:14.463799   13512 main.go:141] libmachine: Decoding PEM data...
	I0328 00:09:14.463799   13512 main.go:141] libmachine: Parsing certificate...
	I0328 00:09:14.463799   13512 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem
	I0328 00:09:14.464792   13512 main.go:141] libmachine: Decoding PEM data...
	I0328 00:09:14.464792   13512 main.go:141] libmachine: Parsing certificate...
	I0328 00:09:14.464792   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0328 00:09:16.606439   13512 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0328 00:09:16.606439   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:09:16.607241   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0328 00:09:18.581607   13512 main.go:141] libmachine: [stdout =====>] : False
	
	I0328 00:09:18.582547   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:09:18.582547   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0328 00:09:20.215435   13512 main.go:141] libmachine: [stdout =====>] : True
	
	I0328 00:09:20.215556   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:09:20.215556   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0328 00:09:24.361024   13512 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0328 00:09:24.361024   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:09:24.363440   13512 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube6/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.0-1711559712-18485-amd64.iso...
	I0328 00:09:24.896896   13512 main.go:141] libmachine: Creating SSH key...
	I0328 00:09:24.969037   13512 main.go:141] libmachine: Creating VM...
	I0328 00:09:24.969037   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0328 00:09:28.120241   13512 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0328 00:09:28.121165   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:09:28.121436   13512 main.go:141] libmachine: Using switch "Default Switch"
	I0328 00:09:28.121565   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0328 00:09:30.042061   13512 main.go:141] libmachine: [stdout =====>] : True
	
	I0328 00:09:30.042116   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:09:30.042257   13512 main.go:141] libmachine: Creating VHD
	I0328 00:09:30.042345   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-170000-m03\fixed.vhd' -SizeBytes 10MB -Fixed
	I0328 00:09:34.040978   13512 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube6
	Path                    : C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-170000-m03\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : AF977A12-6A66-403E-BF63-8FC75EA3BF37
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0328 00:09:34.041977   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:09:34.042033   13512 main.go:141] libmachine: Writing magic tar header
	I0328 00:09:34.042063   13512 main.go:141] libmachine: Writing SSH key tar header
	I0328 00:09:34.052783   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-170000-m03\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-170000-m03\disk.vhd' -VHDType Dynamic -DeleteSource
	I0328 00:09:37.354393   13512 main.go:141] libmachine: [stdout =====>] : 
	I0328 00:09:37.354629   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:09:37.354629   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-170000-m03\disk.vhd' -SizeBytes 20000MB
	I0328 00:09:40.075685   13512 main.go:141] libmachine: [stdout =====>] : 
	I0328 00:09:40.075801   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:09:40.075801   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-170000-m03 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-170000-m03' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0328 00:09:44.605015   13512 main.go:141] libmachine: [stdout =====>] : 
	Name          State CPUUsage(%!)(MISSING) MemoryAssigned(M) Uptime   Status             Version
	----          ----- ----------- ----------------- ------   ------             -------
	ha-170000-m03 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0328 00:09:44.605083   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:09:44.605248   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-170000-m03 -DynamicMemoryEnabled $false
	I0328 00:09:47.023311   13512 main.go:141] libmachine: [stdout =====>] : 
	I0328 00:09:47.023311   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:09:47.023311   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-170000-m03 -Count 2
	I0328 00:09:49.403984   13512 main.go:141] libmachine: [stdout =====>] : 
	I0328 00:09:49.403984   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:09:49.403984   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-170000-m03 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-170000-m03\boot2docker.iso'
	I0328 00:09:52.210166   13512 main.go:141] libmachine: [stdout =====>] : 
	I0328 00:09:52.210518   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:09:52.210634   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-170000-m03 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-170000-m03\disk.vhd'
	I0328 00:09:55.070745   13512 main.go:141] libmachine: [stdout =====>] : 
	I0328 00:09:55.070745   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:09:55.070745   13512 main.go:141] libmachine: Starting VM...
	I0328 00:09:55.070745   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-170000-m03
	I0328 00:09:58.341654   13512 main.go:141] libmachine: [stdout =====>] : 
	I0328 00:09:58.342255   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:09:58.342288   13512 main.go:141] libmachine: Waiting for host to start...
	I0328 00:09:58.342345   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-170000-m03 ).state
	I0328 00:10:00.768092   13512 main.go:141] libmachine: [stdout =====>] : Running
	
	I0328 00:10:00.768963   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:10:00.769067   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-170000-m03 ).networkadapters[0]).ipaddresses[0]
	I0328 00:10:03.439413   13512 main.go:141] libmachine: [stdout =====>] : 
	I0328 00:10:03.440106   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:10:04.447876   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-170000-m03 ).state
	I0328 00:10:06.764931   13512 main.go:141] libmachine: [stdout =====>] : Running
	
	I0328 00:10:06.764931   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:10:06.764931   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-170000-m03 ).networkadapters[0]).ipaddresses[0]
	I0328 00:10:09.459643   13512 main.go:141] libmachine: [stdout =====>] : 
	I0328 00:10:09.459643   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:10:10.470763   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-170000-m03 ).state
	I0328 00:10:12.809573   13512 main.go:141] libmachine: [stdout =====>] : Running
	
	I0328 00:10:12.809634   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:10:12.809716   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-170000-m03 ).networkadapters[0]).ipaddresses[0]
	I0328 00:10:15.521240   13512 main.go:141] libmachine: [stdout =====>] : 
	I0328 00:10:15.521301   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:10:16.527268   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-170000-m03 ).state
	I0328 00:10:18.884599   13512 main.go:141] libmachine: [stdout =====>] : Running
	
	I0328 00:10:18.885246   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:10:18.885394   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-170000-m03 ).networkadapters[0]).ipaddresses[0]
	I0328 00:10:21.583674   13512 main.go:141] libmachine: [stdout =====>] : 
	I0328 00:10:21.583674   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:10:22.589099   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-170000-m03 ).state
	I0328 00:10:24.949863   13512 main.go:141] libmachine: [stdout =====>] : Running
	
	I0328 00:10:24.950291   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:10:24.950291   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-170000-m03 ).networkadapters[0]).ipaddresses[0]
	I0328 00:10:27.701348   13512 main.go:141] libmachine: [stdout =====>] : 172.28.227.17
	
	I0328 00:10:27.701348   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:10:27.701701   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-170000-m03 ).state
	I0328 00:10:30.023831   13512 main.go:141] libmachine: [stdout =====>] : Running
	
	I0328 00:10:30.023831   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:10:30.023831   13512 machine.go:94] provisionDockerMachine start ...
	I0328 00:10:30.024847   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-170000-m03 ).state
	I0328 00:10:32.374175   13512 main.go:141] libmachine: [stdout =====>] : Running
	
	I0328 00:10:32.374175   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:10:32.374175   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-170000-m03 ).networkadapters[0]).ipaddresses[0]
	I0328 00:10:35.136968   13512 main.go:141] libmachine: [stdout =====>] : 172.28.227.17
	
	I0328 00:10:35.136968   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:10:35.142467   13512 main.go:141] libmachine: Using SSH client type: native
	I0328 00:10:35.142529   13512 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x12a9f80] 0x12acb60 <nil>  [] 0s} 172.28.227.17 22 <nil> <nil>}
	I0328 00:10:35.143063   13512 main.go:141] libmachine: About to run SSH command:
	hostname
	I0328 00:10:35.272914   13512 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0328 00:10:35.272914   13512 buildroot.go:166] provisioning hostname "ha-170000-m03"
	I0328 00:10:35.272914   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-170000-m03 ).state
	I0328 00:10:37.592909   13512 main.go:141] libmachine: [stdout =====>] : Running
	
	I0328 00:10:37.592909   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:10:37.592909   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-170000-m03 ).networkadapters[0]).ipaddresses[0]
	I0328 00:10:40.329320   13512 main.go:141] libmachine: [stdout =====>] : 172.28.227.17
	
	I0328 00:10:40.330061   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:10:40.335836   13512 main.go:141] libmachine: Using SSH client type: native
	I0328 00:10:40.336409   13512 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x12a9f80] 0x12acb60 <nil>  [] 0s} 172.28.227.17 22 <nil> <nil>}
	I0328 00:10:40.336409   13512 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-170000-m03 && echo "ha-170000-m03" | sudo tee /etc/hostname
	I0328 00:10:40.494672   13512 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-170000-m03
	
	I0328 00:10:40.494783   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-170000-m03 ).state
	I0328 00:10:42.820519   13512 main.go:141] libmachine: [stdout =====>] : Running
	
	I0328 00:10:42.820688   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:10:42.820760   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-170000-m03 ).networkadapters[0]).ipaddresses[0]
	I0328 00:10:45.648838   13512 main.go:141] libmachine: [stdout =====>] : 172.28.227.17
	
	I0328 00:10:45.649744   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:10:45.655517   13512 main.go:141] libmachine: Using SSH client type: native
	I0328 00:10:45.656049   13512 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x12a9f80] 0x12acb60 <nil>  [] 0s} 172.28.227.17 22 <nil> <nil>}
	I0328 00:10:45.656049   13512 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-170000-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-170000-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-170000-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0328 00:10:45.801301   13512 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0328 00:10:45.801301   13512 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0328 00:10:45.801301   13512 buildroot.go:174] setting up certificates
	I0328 00:10:45.801301   13512 provision.go:84] configureAuth start
	I0328 00:10:45.801301   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-170000-m03 ).state
	I0328 00:10:48.146975   13512 main.go:141] libmachine: [stdout =====>] : Running
	
	I0328 00:10:48.147834   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:10:48.147941   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-170000-m03 ).networkadapters[0]).ipaddresses[0]
	I0328 00:10:50.952178   13512 main.go:141] libmachine: [stdout =====>] : 172.28.227.17
	
	I0328 00:10:50.952719   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:10:50.952719   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-170000-m03 ).state
	I0328 00:10:53.220018   13512 main.go:141] libmachine: [stdout =====>] : Running
	
	I0328 00:10:53.220281   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:10:53.220384   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-170000-m03 ).networkadapters[0]).ipaddresses[0]
	I0328 00:10:55.971389   13512 main.go:141] libmachine: [stdout =====>] : 172.28.227.17
	
	I0328 00:10:55.972214   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:10:55.972214   13512 provision.go:143] copyHostCerts
	I0328 00:10:55.972436   13512 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0328 00:10:55.981827   13512 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0328 00:10:55.981827   13512 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0328 00:10:55.982587   13512 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0328 00:10:55.983941   13512 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0328 00:10:55.992781   13512 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0328 00:10:55.992781   13512 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0328 00:10:55.993320   13512 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0328 00:10:55.994217   13512 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0328 00:10:56.002427   13512 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0328 00:10:56.002427   13512 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0328 00:10:56.003436   13512 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1675 bytes)
	I0328 00:10:56.004418   13512 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-170000-m03 san=[127.0.0.1 172.28.227.17 ha-170000-m03 localhost minikube]
	I0328 00:10:56.128965   13512 provision.go:177] copyRemoteCerts
	I0328 00:10:56.143435   13512 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0328 00:10:56.143435   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-170000-m03 ).state
	I0328 00:10:58.412367   13512 main.go:141] libmachine: [stdout =====>] : Running
	
	I0328 00:10:58.412437   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:10:58.412619   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-170000-m03 ).networkadapters[0]).ipaddresses[0]
	I0328 00:11:01.224447   13512 main.go:141] libmachine: [stdout =====>] : 172.28.227.17
	
	I0328 00:11:01.224447   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:11:01.225355   13512 sshutil.go:53] new ssh client: &{IP:172.28.227.17 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-170000-m03\id_rsa Username:docker}
	I0328 00:11:01.327995   13512 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (5.1844144s)
	I0328 00:11:01.327995   13512 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0328 00:11:01.328346   13512 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0328 00:11:01.382551   13512 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0328 00:11:01.383168   13512 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0328 00:11:01.434334   13512 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0328 00:11:01.434874   13512 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I0328 00:11:01.510845   13512 provision.go:87] duration metric: took 15.7094462s to configureAuth
	I0328 00:11:01.510845   13512 buildroot.go:189] setting minikube options for container-runtime
	I0328 00:11:01.527005   13512 config.go:182] Loaded profile config "ha-170000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0328 00:11:01.527158   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-170000-m03 ).state
	I0328 00:11:03.956380   13512 main.go:141] libmachine: [stdout =====>] : Running
	
	I0328 00:11:03.956449   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:11:03.956606   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-170000-m03 ).networkadapters[0]).ipaddresses[0]
	I0328 00:11:06.729489   13512 main.go:141] libmachine: [stdout =====>] : 172.28.227.17
	
	I0328 00:11:06.729489   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:11:06.736658   13512 main.go:141] libmachine: Using SSH client type: native
	I0328 00:11:06.737324   13512 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x12a9f80] 0x12acb60 <nil>  [] 0s} 172.28.227.17 22 <nil> <nil>}
	I0328 00:11:06.737324   13512 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0328 00:11:06.859725   13512 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0328 00:11:06.859725   13512 buildroot.go:70] root file system type: tmpfs
	I0328 00:11:06.859725   13512 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0328 00:11:06.859725   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-170000-m03 ).state
	I0328 00:11:09.162077   13512 main.go:141] libmachine: [stdout =====>] : Running
	
	I0328 00:11:09.163077   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:11:09.163143   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-170000-m03 ).networkadapters[0]).ipaddresses[0]
	I0328 00:11:11.938069   13512 main.go:141] libmachine: [stdout =====>] : 172.28.227.17
	
	I0328 00:11:11.938069   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:11:11.945302   13512 main.go:141] libmachine: Using SSH client type: native
	I0328 00:11:11.945476   13512 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x12a9f80] 0x12acb60 <nil>  [] 0s} 172.28.227.17 22 <nil> <nil>}
	I0328 00:11:11.945476   13512 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.28.239.31"
	Environment="NO_PROXY=172.28.239.31,172.28.224.3"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0328 00:11:12.113989   13512 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.28.239.31
	Environment=NO_PROXY=172.28.239.31,172.28.224.3
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this option.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0328 00:11:12.114099   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-170000-m03 ).state
	I0328 00:11:14.400400   13512 main.go:141] libmachine: [stdout =====>] : Running
	
	I0328 00:11:14.400574   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:11:14.400574   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-170000-m03 ).networkadapters[0]).ipaddresses[0]
	I0328 00:11:17.150210   13512 main.go:141] libmachine: [stdout =====>] : 172.28.227.17
	
	I0328 00:11:17.150391   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:11:17.156150   13512 main.go:141] libmachine: Using SSH client type: native
	I0328 00:11:17.156908   13512 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x12a9f80] 0x12acb60 <nil>  [] 0s} 172.28.227.17 22 <nil> <nil>}
	I0328 00:11:17.156908   13512 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0328 00:11:19.425896   13512 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0328 00:11:19.425896   13512 machine.go:97] duration metric: took 49.4017594s to provisionDockerMachine
	I0328 00:11:19.425896   13512 client.go:171] duration metric: took 2m4.9623144s to LocalClient.Create
	I0328 00:11:19.425896   13512 start.go:167] duration metric: took 2m4.9623144s to libmachine.API.Create "ha-170000"
	I0328 00:11:19.425896   13512 start.go:293] postStartSetup for "ha-170000-m03" (driver="hyperv")
	I0328 00:11:19.425896   13512 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0328 00:11:19.439712   13512 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0328 00:11:19.439712   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-170000-m03 ).state
	I0328 00:11:21.743398   13512 main.go:141] libmachine: [stdout =====>] : Running
	
	I0328 00:11:21.743398   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:11:21.743398   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-170000-m03 ).networkadapters[0]).ipaddresses[0]
	I0328 00:11:24.457643   13512 main.go:141] libmachine: [stdout =====>] : 172.28.227.17
	
	I0328 00:11:24.457643   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:11:24.462510   13512 sshutil.go:53] new ssh client: &{IP:172.28.227.17 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-170000-m03\id_rsa Username:docker}
	I0328 00:11:24.566324   13512 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (5.1265798s)
	I0328 00:11:24.579256   13512 ssh_runner.go:195] Run: cat /etc/os-release
	I0328 00:11:24.587473   13512 info.go:137] Remote host: Buildroot 2023.02.9
	I0328 00:11:24.587473   13512 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0328 00:11:24.588182   13512 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0328 00:11:24.589090   13512 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\104602.pem -> 104602.pem in /etc/ssl/certs
	I0328 00:11:24.589167   13512 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\104602.pem -> /etc/ssl/certs/104602.pem
	I0328 00:11:24.604039   13512 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0328 00:11:24.623668   13512 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\104602.pem --> /etc/ssl/certs/104602.pem (1708 bytes)
	I0328 00:11:24.676946   13512 start.go:296] duration metric: took 5.2510174s for postStartSetup
	I0328 00:11:24.679915   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-170000-m03 ).state
	I0328 00:11:26.967162   13512 main.go:141] libmachine: [stdout =====>] : Running
	
	I0328 00:11:26.967162   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:11:26.967373   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-170000-m03 ).networkadapters[0]).ipaddresses[0]
	I0328 00:11:29.695411   13512 main.go:141] libmachine: [stdout =====>] : 172.28.227.17
	
	I0328 00:11:29.695411   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:11:29.695411   13512 profile.go:142] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-170000\config.json ...
	I0328 00:11:29.698252   13512 start.go:128] duration metric: took 2m15.237623s to createHost
	I0328 00:11:29.698252   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-170000-m03 ).state
	I0328 00:11:31.981641   13512 main.go:141] libmachine: [stdout =====>] : Running
	
	I0328 00:11:31.981930   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:11:31.981930   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-170000-m03 ).networkadapters[0]).ipaddresses[0]
	I0328 00:11:34.699491   13512 main.go:141] libmachine: [stdout =====>] : 172.28.227.17
	
	I0328 00:11:34.700390   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:11:34.706044   13512 main.go:141] libmachine: Using SSH client type: native
	I0328 00:11:34.706799   13512 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x12a9f80] 0x12acb60 <nil>  [] 0s} 172.28.227.17 22 <nil> <nil>}
	I0328 00:11:34.706799   13512 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0328 00:11:34.833256   13512 main.go:141] libmachine: SSH cmd err, output: <nil>: 1711584694.842257460
	
	I0328 00:11:34.833369   13512 fix.go:216] guest clock: 1711584694.842257460
	I0328 00:11:34.833369   13512 fix.go:229] Guest: 2024-03-28 00:11:34.84225746 +0000 UTC Remote: 2024-03-28 00:11:29.6982526 +0000 UTC m=+605.454643701 (delta=5.14400486s)
	I0328 00:11:34.833511   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-170000-m03 ).state
	I0328 00:11:37.106711   13512 main.go:141] libmachine: [stdout =====>] : Running
	
	I0328 00:11:37.106711   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:11:37.107728   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-170000-m03 ).networkadapters[0]).ipaddresses[0]
	I0328 00:11:39.861297   13512 main.go:141] libmachine: [stdout =====>] : 172.28.227.17
	
	I0328 00:11:39.861297   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:11:39.867067   13512 main.go:141] libmachine: Using SSH client type: native
	I0328 00:11:39.867221   13512 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x12a9f80] 0x12acb60 <nil>  [] 0s} 172.28.227.17 22 <nil> <nil>}
	I0328 00:11:39.867221   13512 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1711584694
	I0328 00:11:40.017756   13512 main.go:141] libmachine: SSH cmd err, output: <nil>: Thu Mar 28 00:11:34 UTC 2024
	
	I0328 00:11:40.017756   13512 fix.go:236] clock set: Thu Mar 28 00:11:34 UTC 2024
	 (err=<nil>)
	I0328 00:11:40.017756   13512 start.go:83] releasing machines lock for "ha-170000-m03", held for 2m25.5580492s
	I0328 00:11:40.017982   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-170000-m03 ).state
	I0328 00:11:42.307215   13512 main.go:141] libmachine: [stdout =====>] : Running
	
	I0328 00:11:42.307857   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:11:42.307857   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-170000-m03 ).networkadapters[0]).ipaddresses[0]
	I0328 00:11:45.088912   13512 main.go:141] libmachine: [stdout =====>] : 172.28.227.17
	
	I0328 00:11:45.088912   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:11:45.095105   13512 out.go:177] * Found network options:
	I0328 00:11:45.097506   13512 out.go:177]   - NO_PROXY=172.28.239.31,172.28.224.3
	W0328 00:11:45.099273   13512 proxy.go:119] fail to check proxy env: Error ip not in block
	W0328 00:11:45.099273   13512 proxy.go:119] fail to check proxy env: Error ip not in block
	I0328 00:11:45.101175   13512 out.go:177]   - NO_PROXY=172.28.239.31,172.28.224.3
	W0328 00:11:45.104173   13512 proxy.go:119] fail to check proxy env: Error ip not in block
	W0328 00:11:45.104173   13512 proxy.go:119] fail to check proxy env: Error ip not in block
	W0328 00:11:45.104471   13512 proxy.go:119] fail to check proxy env: Error ip not in block
	W0328 00:11:45.104471   13512 proxy.go:119] fail to check proxy env: Error ip not in block
	I0328 00:11:45.107490   13512 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0328 00:11:45.107490   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-170000-m03 ).state
	I0328 00:11:45.117580   13512 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0328 00:11:45.117580   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-170000-m03 ).state
	I0328 00:11:47.455427   13512 main.go:141] libmachine: [stdout =====>] : Running
	
	I0328 00:11:47.455427   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:11:47.455427   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-170000-m03 ).networkadapters[0]).ipaddresses[0]
	I0328 00:11:47.491755   13512 main.go:141] libmachine: [stdout =====>] : Running
	
	I0328 00:11:47.491834   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:11:47.491892   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-170000-m03 ).networkadapters[0]).ipaddresses[0]
	I0328 00:11:50.362432   13512 main.go:141] libmachine: [stdout =====>] : 172.28.227.17
	
	I0328 00:11:50.362507   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:11:50.362507   13512 sshutil.go:53] new ssh client: &{IP:172.28.227.17 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-170000-m03\id_rsa Username:docker}
	I0328 00:11:50.391889   13512 main.go:141] libmachine: [stdout =====>] : 172.28.227.17
	
	I0328 00:11:50.391970   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:11:50.392565   13512 sshutil.go:53] new ssh client: &{IP:172.28.227.17 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-170000-m03\id_rsa Username:docker}
	I0328 00:11:50.559763   13512 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (5.4420443s)
	W0328 00:11:50.559763   13512 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0328 00:11:50.559763   13512 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.4521343s)
	I0328 00:11:50.573439   13512 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0328 00:11:50.606513   13512 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0328 00:11:50.606513   13512 start.go:494] detecting cgroup driver to use...
	I0328 00:11:50.606513   13512 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0328 00:11:50.658201   13512 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0328 00:11:50.694246   13512 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0328 00:11:50.717044   13512 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0328 00:11:50.729576   13512 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0328 00:11:50.763493   13512 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0328 00:11:50.797091   13512 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0328 00:11:50.832209   13512 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0328 00:11:50.868567   13512 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0328 00:11:50.905268   13512 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0328 00:11:50.940944   13512 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0328 00:11:50.976374   13512 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0328 00:11:51.010344   13512 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0328 00:11:51.046659   13512 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0328 00:11:51.081592   13512 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0328 00:11:51.292532   13512 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0328 00:11:51.327380   13512 start.go:494] detecting cgroup driver to use...
	I0328 00:11:51.342874   13512 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0328 00:11:51.384668   13512 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0328 00:11:51.423634   13512 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0328 00:11:51.478177   13512 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0328 00:11:51.517605   13512 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0328 00:11:51.560271   13512 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0328 00:11:51.627862   13512 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0328 00:11:51.656380   13512 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0328 00:11:51.709153   13512 ssh_runner.go:195] Run: which cri-dockerd
	I0328 00:11:51.728844   13512 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0328 00:11:51.747791   13512 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0328 00:11:51.795547   13512 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0328 00:11:52.020194   13512 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0328 00:11:52.244134   13512 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0328 00:11:52.244253   13512 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0328 00:11:52.293618   13512 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0328 00:11:52.521802   13512 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0328 00:11:55.154311   13512 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.6324928s)
	I0328 00:11:55.170396   13512 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0328 00:11:55.212045   13512 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0328 00:11:55.249933   13512 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0328 00:11:55.467978   13512 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0328 00:11:55.692741   13512 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0328 00:11:55.917272   13512 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0328 00:11:55.970058   13512 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0328 00:11:56.011352   13512 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0328 00:11:56.242569   13512 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0328 00:11:56.356668   13512 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0328 00:11:56.372747   13512 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0328 00:11:56.382175   13512 start.go:562] Will wait 60s for crictl version
	I0328 00:11:56.396011   13512 ssh_runner.go:195] Run: which crictl
	I0328 00:11:56.415458   13512 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0328 00:11:56.499931   13512 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.0
	RuntimeApiVersion:  v1
	I0328 00:11:56.509980   13512 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0328 00:11:56.556128   13512 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0328 00:11:56.592284   13512 out.go:204] * Preparing Kubernetes v1.29.3 on Docker 26.0.0 ...
	I0328 00:11:56.595794   13512 out.go:177]   - env NO_PROXY=172.28.239.31
	I0328 00:11:56.598814   13512 out.go:177]   - env NO_PROXY=172.28.239.31,172.28.224.3
	I0328 00:11:56.600724   13512 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0328 00:11:56.604724   13512 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0328 00:11:56.604724   13512 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0328 00:11:56.604724   13512 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0328 00:11:56.604724   13512 ip.go:207] Found interface: {Index:6 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:26:7a:39 Flags:up|broadcast|multicast|running}
	I0328 00:11:56.607642   13512 ip.go:210] interface addr: fe80::e3e0:8483:9c84:940f/64
	I0328 00:11:56.607642   13512 ip.go:210] interface addr: 172.28.224.1/20
	I0328 00:11:56.619667   13512 ssh_runner.go:195] Run: grep 172.28.224.1	host.minikube.internal$ /etc/hosts
	I0328 00:11:56.626331   13512 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.28.224.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0328 00:11:56.650427   13512 mustload.go:65] Loading cluster: ha-170000
	I0328 00:11:56.650736   13512 config.go:182] Loaded profile config "ha-170000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0328 00:11:56.661799   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-170000 ).state
	I0328 00:11:58.911445   13512 main.go:141] libmachine: [stdout =====>] : Running
	
	I0328 00:11:58.911445   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:11:58.911445   13512 host.go:66] Checking if "ha-170000" exists ...
	I0328 00:11:58.912164   13512 certs.go:68] Setting up C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-170000 for IP: 172.28.227.17
	I0328 00:11:58.912164   13512 certs.go:194] generating shared ca certs ...
	I0328 00:11:58.912164   13512 certs.go:226] acquiring lock for ca certs: {Name:mkc71405905d3cea24da832e98113e061e759324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 00:11:58.930691   13512 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key
	I0328 00:11:58.943675   13512 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key
	I0328 00:11:58.943675   13512 certs.go:256] generating profile certs ...
	I0328 00:11:58.944679   13512 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-170000\client.key
	I0328 00:11:58.944679   13512 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-170000\apiserver.key.18645f47
	I0328 00:11:58.944679   13512 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-170000\apiserver.crt.18645f47 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.28.239.31 172.28.224.3 172.28.227.17 172.28.239.254]
	I0328 00:11:59.094505   13512 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-170000\apiserver.crt.18645f47 ...
	I0328 00:11:59.094505   13512 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-170000\apiserver.crt.18645f47: {Name:mk775257f382591a7ec7000c86c060a0540ed0e9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 00:11:59.095850   13512 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-170000\apiserver.key.18645f47 ...
	I0328 00:11:59.095850   13512 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-170000\apiserver.key.18645f47: {Name:mk86d5c3ddc5fb09aa811e85a0cb8b7d8a26f6d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 00:11:59.096193   13512 certs.go:381] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-170000\apiserver.crt.18645f47 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-170000\apiserver.crt
	I0328 00:11:59.108169   13512 certs.go:385] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-170000\apiserver.key.18645f47 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-170000\apiserver.key
	I0328 00:11:59.123919   13512 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-170000\proxy-client.key
	I0328 00:11:59.123919   13512 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0328 00:11:59.124189   13512 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0328 00:11:59.124472   13512 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0328 00:11:59.124817   13512 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0328 00:11:59.125077   13512 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-170000\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0328 00:11:59.125332   13512 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-170000\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0328 00:11:59.125467   13512 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-170000\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0328 00:11:59.125635   13512 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-170000\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0328 00:11:59.126270   13512 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\10460.pem (1338 bytes)
	W0328 00:11:59.128252   13512 certs.go:480] ignoring C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\10460_empty.pem, impossibly tiny 0 bytes
	I0328 00:11:59.128463   13512 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0328 00:11:59.128818   13512 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0328 00:11:59.129186   13512 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0328 00:11:59.129524   13512 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0328 00:11:59.130273   13512 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\104602.pem (1708 bytes)
	I0328 00:11:59.130533   13512 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\104602.pem -> /usr/share/ca-certificates/104602.pem
	I0328 00:11:59.130533   13512 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0328 00:11:59.130533   13512 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\10460.pem -> /usr/share/ca-certificates/10460.pem
	I0328 00:11:59.130533   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-170000 ).state
	I0328 00:12:01.430637   13512 main.go:141] libmachine: [stdout =====>] : Running
	
	I0328 00:12:01.430637   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:12:01.430904   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-170000 ).networkadapters[0]).ipaddresses[0]
	I0328 00:12:04.202047   13512 main.go:141] libmachine: [stdout =====>] : 172.28.239.31
	
	I0328 00:12:04.202047   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:12:04.202931   13512 sshutil.go:53] new ssh client: &{IP:172.28.239.31 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-170000\id_rsa Username:docker}
	I0328 00:12:04.309542   13512 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0328 00:12:04.318209   13512 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0328 00:12:04.359642   13512 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0328 00:12:04.367426   13512 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0328 00:12:04.405277   13512 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0328 00:12:04.412071   13512 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0328 00:12:04.445163   13512 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0328 00:12:04.452995   13512 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0328 00:12:04.494261   13512 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0328 00:12:04.503478   13512 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0328 00:12:04.541691   13512 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0328 00:12:04.548643   13512 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0328 00:12:04.572314   13512 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0328 00:12:04.625706   13512 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0328 00:12:04.677441   13512 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0328 00:12:04.729212   13512 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0328 00:12:04.777433   13512 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-170000\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0328 00:12:04.829262   13512 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-170000\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0328 00:12:04.880597   13512 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-170000\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0328 00:12:04.932278   13512 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-170000\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0328 00:12:04.985135   13512 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\104602.pem --> /usr/share/ca-certificates/104602.pem (1708 bytes)
	I0328 00:12:05.035832   13512 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0328 00:12:05.084191   13512 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\10460.pem --> /usr/share/ca-certificates/10460.pem (1338 bytes)
	I0328 00:12:05.136133   13512 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0328 00:12:05.170889   13512 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0328 00:12:05.204980   13512 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0328 00:12:05.237575   13512 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0328 00:12:05.272293   13512 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0328 00:12:05.307298   13512 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0328 00:12:05.341734   13512 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (758 bytes)
	I0328 00:12:05.389371   13512 ssh_runner.go:195] Run: openssl version
	I0328 00:12:05.412157   13512 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/104602.pem && ln -fs /usr/share/ca-certificates/104602.pem /etc/ssl/certs/104602.pem"
	I0328 00:12:05.445342   13512 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/104602.pem
	I0328 00:12:05.453654   13512 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 27 23:40 /usr/share/ca-certificates/104602.pem
	I0328 00:12:05.467295   13512 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/104602.pem
	I0328 00:12:05.495145   13512 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/104602.pem /etc/ssl/certs/3ec20f2e.0"
	I0328 00:12:05.531582   13512 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0328 00:12:05.567682   13512 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0328 00:12:05.578846   13512 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 27 23:37 /usr/share/ca-certificates/minikubeCA.pem
	I0328 00:12:05.595060   13512 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0328 00:12:05.622555   13512 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0328 00:12:05.660818   13512 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10460.pem && ln -fs /usr/share/ca-certificates/10460.pem /etc/ssl/certs/10460.pem"
	I0328 00:12:05.698921   13512 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10460.pem
	I0328 00:12:05.706748   13512 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 27 23:40 /usr/share/ca-certificates/10460.pem
	I0328 00:12:05.725025   13512 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10460.pem
	I0328 00:12:05.749169   13512 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10460.pem /etc/ssl/certs/51391683.0"
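	The runs above install each PEM into the shared trust store by computing its OpenSSL subject hash and creating the `<hash>.0` symlink that `-CApath` lookups expect. A minimal sketch of that pattern against a scratch directory (the CN and paths are illustrative; the log operates on /usr/share/ca-certificates and /etc/ssl/certs with sudo):

```shell
# Hash-and-symlink CA install, as in the log, but in a temp dir (no sudo).
CERT_DIR=$(mktemp -d)
# Throwaway self-signed cert as a stand-in for minikubeCA.pem.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 -subj "/CN=minikubeCA" \
  -keyout "$CERT_DIR/ca.key" -out "$CERT_DIR/minikubeCA.pem" 2>/dev/null
# Subject hash, i.e. what `openssl x509 -hash -noout` printed in the log.
HASH=$(openssl x509 -hash -noout -in "$CERT_DIR/minikubeCA.pem")
# The <hash>.0 link that the `ln -fs` runs above create under /etc/ssl/certs.
ln -fs "$CERT_DIR/minikubeCA.pem" "$CERT_DIR/$HASH.0"
```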
	I0328 00:12:05.788934   13512 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0328 00:12:05.796821   13512 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0328 00:12:05.796821   13512 kubeadm.go:928] updating node {m03 172.28.227.17 8443 v1.29.3 docker true true} ...
	I0328 00:12:05.797356   13512 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-170000-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.28.227.17
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:ha-170000 Namespace:default APIServerHAVIP:172.28.239.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0328 00:12:05.797356   13512 kube-vip.go:111] generating kube-vip config ...
	I0328 00:12:05.809099   13512 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0328 00:12:05.837891   13512 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0328 00:12:05.838151   13512 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.28.239.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0328 00:12:05.854097   13512 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0328 00:12:05.875439   13512 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.29.3: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.29.3': No such file or directory
	
	Initiating transfer...
	I0328 00:12:05.887961   13512 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.29.3
	I0328 00:12:05.912512   13512 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubeadm.sha256
	I0328 00:12:05.912512   13512 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubectl.sha256
	I0328 00:12:05.912512   13512 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubelet.sha256
	I0328 00:12:05.912785   13512 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.29.3/kubectl -> /var/lib/minikube/binaries/v1.29.3/kubectl
	I0328 00:12:05.912785   13512 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.29.3/kubeadm -> /var/lib/minikube/binaries/v1.29.3/kubeadm
	I0328 00:12:05.926881   13512 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0328 00:12:05.942065   13512 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.29.3/kubectl
	I0328 00:12:05.943093   13512 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.29.3/kubeadm
	I0328 00:12:05.957621   13512 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.29.3/kubelet -> /var/lib/minikube/binaries/v1.29.3/kubelet
	I0328 00:12:05.957621   13512 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.29.3/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.29.3/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.29.3/kubectl': No such file or directory
	I0328 00:12:05.957698   13512 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.29.3/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.29.3/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.29.3/kubeadm': No such file or directory
	I0328 00:12:05.957698   13512 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.29.3/kubectl --> /var/lib/minikube/binaries/v1.29.3/kubectl (49799168 bytes)
	I0328 00:12:05.957698   13512 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.29.3/kubeadm --> /var/lib/minikube/binaries/v1.29.3/kubeadm (48340992 bytes)
	I0328 00:12:05.988206   13512 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.29.3/kubelet
	I0328 00:12:06.060728   13512 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.29.3/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.29.3/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.29.3/kubelet': No such file or directory
	I0328 00:12:06.060728   13512 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.29.3/kubelet --> /var/lib/minikube/binaries/v1.29.3/kubelet (111919104 bytes)
	I0328 00:12:07.469413   13512 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0328 00:12:07.490772   13512 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0328 00:12:07.528169   13512 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0328 00:12:07.571377   13512 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1346 bytes)
	I0328 00:12:07.627176   13512 ssh_runner.go:195] Run: grep 172.28.239.254	control-plane.minikube.internal$ /etc/hosts
	I0328 00:12:07.635302   13512 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.28.239.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
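	The one-liner above is minikube's idempotent hosts-file update: drop any stale `control-plane.minikube.internal` line, append the current VIP, and copy the result back over /etc/hosts. The same pattern against a scratch file (the VIP value is taken from the log; the file contents are illustrative):

```shell
# Idempotent hosts-file rewrite, as in the log's grep/echo/cp one-liner,
# run against a temp file instead of /etc/hosts (no sudo needed).
HOSTS=$(mktemp)
printf '127.0.0.1\tlocalhost\n10.0.0.99\tcontrol-plane.minikube.internal\n' > "$HOSTS"
VIP=172.28.239.254   # the HA apiserver VIP from the log
# Strip any existing entry, then append the fresh one.
{ grep -v $'\tcontrol-plane.minikube.internal$' "$HOSTS"; \
  printf '%s\tcontrol-plane.minikube.internal\n' "$VIP"; } > "$HOSTS.new"
mv "$HOSTS.new" "$HOSTS"
```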
	I0328 00:12:07.676140   13512 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0328 00:12:07.914597   13512 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0328 00:12:07.950295   13512 host.go:66] Checking if "ha-170000" exists ...
	I0328 00:12:07.972451   13512 start.go:316] joinCluster: &{Name:ha-170000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:ha-170000 Namespace:default APIServerHAVIP:172.28.239.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.28.239.31 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.28.224.3 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:172.28.227.17 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0328 00:12:07.972979   13512 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0328 00:12:07.973377   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-170000 ).state
	I0328 00:12:10.230652   13512 main.go:141] libmachine: [stdout =====>] : Running
	
	I0328 00:12:10.231380   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:12:10.231488   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-170000 ).networkadapters[0]).ipaddresses[0]
	I0328 00:12:13.002440   13512 main.go:141] libmachine: [stdout =====>] : 172.28.239.31
	
	I0328 00:12:13.002546   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:12:13.003041   13512 sshutil.go:53] new ssh client: &{IP:172.28.239.31 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-170000\id_rsa Username:docker}
	I0328 00:12:13.234374   13512 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm token create --print-join-command --ttl=0": (5.2613624s)
	I0328 00:12:13.234638   13512 start.go:342] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:172.28.227.17 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0328 00:12:13.234702   13512 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token cnq6yu.t9p6crqq0gi1ikxs --discovery-token-ca-cert-hash sha256:0ddd52f56960165cc9115f65779af061c85c9b2eafbef06ee2128eb63ce54d7a --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-170000-m03 --control-plane --apiserver-advertise-address=172.28.227.17 --apiserver-bind-port=8443"
	I0328 00:13:06.667336   13512 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token cnq6yu.t9p6crqq0gi1ikxs --discovery-token-ca-cert-hash sha256:0ddd52f56960165cc9115f65779af061c85c9b2eafbef06ee2128eb63ce54d7a --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-170000-m03 --control-plane --apiserver-advertise-address=172.28.227.17 --apiserver-bind-port=8443": (53.4323027s)
	I0328 00:13:06.667336   13512 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0328 00:13:07.475438   13512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-170000-m03 minikube.k8s.io/updated_at=2024_03_28T00_13_07_0700 minikube.k8s.io/version=v1.33.0-beta.0 minikube.k8s.io/commit=1a940f980f77c9bfc8ef93678db8c1f49c9dd79d minikube.k8s.io/name=ha-170000 minikube.k8s.io/primary=false
	I0328 00:13:07.701935   13512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-170000-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0328 00:13:07.887530   13512 start.go:318] duration metric: took 59.914708s to joinCluster
	I0328 00:13:07.887530   13512 start.go:234] Will wait 6m0s for node &{Name:m03 IP:172.28.227.17 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0328 00:13:07.892192   13512 out.go:177] * Verifying Kubernetes components...
	I0328 00:13:07.888661   13512 config.go:182] Loaded profile config "ha-170000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0328 00:13:07.906635   13512 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0328 00:13:08.302425   13512 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0328 00:13:08.355675   13512 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0328 00:13:08.356497   13512 kapi.go:59] client config for ha-170000: &rest.Config{Host:"https://172.28.239.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-170000\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-170000\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x26ab500), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0328 00:13:08.356497   13512 kubeadm.go:477] Overriding stale ClientConfig host https://172.28.239.254:8443 with https://172.28.239.31:8443
	I0328 00:13:08.357766   13512 node_ready.go:35] waiting up to 6m0s for node "ha-170000-m03" to be "Ready" ...
	I0328 00:13:08.358047   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:13:08.358070   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:08.358070   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:08.358120   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:08.372644   13512 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0328 00:13:08.862152   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:13:08.862152   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:08.862152   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:08.862152   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:08.866785   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:13:09.364540   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:13:09.364540   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:09.364540   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:09.364540   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:09.370302   13512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 00:13:09.865902   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:13:09.866124   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:09.866124   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:09.866124   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:09.871658   13512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 00:13:10.370197   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:13:10.370197   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:10.370197   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:10.370197   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:10.374804   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:13:10.376218   13512 node_ready.go:53] node "ha-170000-m03" has status "Ready":"False"
	I0328 00:13:10.860251   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:13:10.860334   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:10.860334   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:10.860334   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:10.867321   13512 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0328 00:13:11.359448   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:13:11.359448   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:11.359448   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:11.359448   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:11.365925   13512 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0328 00:13:11.865242   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:13:11.865296   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:11.865296   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:11.865296   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:11.870902   13512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 00:13:11.872628   13512 node_ready.go:49] node "ha-170000-m03" has status "Ready":"True"
	I0328 00:13:11.872720   13512 node_ready.go:38] duration metric: took 3.5148105s for node "ha-170000-m03" to be "Ready" ...
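	The GET sequence above is a fixed-interval poll with a deadline: re-query `/api/v1/nodes/ha-170000-m03` roughly every 500ms until the node's Ready condition turns True or the 6m0s budget runs out. Stripped of the Kubernetes client, the loop has this shape (the helper name and parameters are my own, not minikube's API):

```shell
# Deadline-bounded polling loop, the shape of minikube's node_ready wait.
# wait_for TIMEOUT_S INTERVAL_S CMD...  -> 0 once CMD succeeds, 1 on timeout.
wait_for() {
  local deadline=$(( $(date +%s) + $1 )); shift
  local interval=$1; shift
  until "$@"; do
    [ "$(date +%s)" -ge "$deadline" ] && return 1
    sleep "$interval"
  done
}
```

In the log the polled command is an HTTPS GET and the success test is the Ready condition in the response; here any command works, e.g. a hypothetical `wait_for 360 0.5 check_node_ready ha-170000-m03`.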
	I0328 00:13:11.872720   13512 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0328 00:13:11.872900   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods
	I0328 00:13:11.872961   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:11.872961   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:11.873013   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:11.901740   13512 round_trippers.go:574] Response Status: 200 OK in 28 milliseconds
	I0328 00:13:11.912352   13512 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-5npq4" in "kube-system" namespace to be "Ready" ...
	I0328 00:13:11.912352   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-5npq4
	I0328 00:13:11.912352   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:11.912352   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:11.912352   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:11.918426   13512 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0328 00:13:11.919954   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000
	I0328 00:13:11.919954   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:11.920015   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:11.920015   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:11.924524   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:13:11.925756   13512 pod_ready.go:92] pod "coredns-76f75df574-5npq4" in "kube-system" namespace has status "Ready":"True"
	I0328 00:13:11.925828   13512 pod_ready.go:81] duration metric: took 13.4762ms for pod "coredns-76f75df574-5npq4" in "kube-system" namespace to be "Ready" ...
	I0328 00:13:11.925828   13512 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-mgrhj" in "kube-system" namespace to be "Ready" ...
	I0328 00:13:11.925934   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-mgrhj
	I0328 00:13:11.926013   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:11.926013   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:11.926062   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:11.929665   13512 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0328 00:13:11.931112   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000
	I0328 00:13:11.931112   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:11.931112   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:11.931112   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:11.936358   13512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 00:13:11.937895   13512 pod_ready.go:92] pod "coredns-76f75df574-mgrhj" in "kube-system" namespace has status "Ready":"True"
	I0328 00:13:11.937895   13512 pod_ready.go:81] duration metric: took 12.0675ms for pod "coredns-76f75df574-mgrhj" in "kube-system" namespace to be "Ready" ...
	I0328 00:13:11.937895   13512 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-170000" in "kube-system" namespace to be "Ready" ...
	I0328 00:13:11.937895   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/etcd-ha-170000
	I0328 00:13:11.937895   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:11.937895   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:11.937895   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:11.943899   13512 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0328 00:13:11.945289   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000
	I0328 00:13:11.945289   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:11.945289   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:11.945383   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:11.950241   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:13:11.951234   13512 pod_ready.go:92] pod "etcd-ha-170000" in "kube-system" namespace has status "Ready":"True"
	I0328 00:13:11.951234   13512 pod_ready.go:81] duration metric: took 13.3385ms for pod "etcd-ha-170000" in "kube-system" namespace to be "Ready" ...
	I0328 00:13:11.951234   13512 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-170000-m02" in "kube-system" namespace to be "Ready" ...
	I0328 00:13:11.951234   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/etcd-ha-170000-m02
	I0328 00:13:11.951234   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:11.951234   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:11.951234   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:11.958263   13512 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0328 00:13:11.959402   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m02
	I0328 00:13:11.959402   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:11.959402   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:11.959402   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:11.965047   13512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 00:13:11.965928   13512 pod_ready.go:92] pod "etcd-ha-170000-m02" in "kube-system" namespace has status "Ready":"True"
	I0328 00:13:11.965928   13512 pod_ready.go:81] duration metric: took 14.6943ms for pod "etcd-ha-170000-m02" in "kube-system" namespace to be "Ready" ...
	I0328 00:13:11.966034   13512 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-170000-m03" in "kube-system" namespace to be "Ready" ...
	I0328 00:13:12.068614   13512 request.go:629] Waited for 102.2008ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/etcd-ha-170000-m03
	I0328 00:13:12.068614   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/etcd-ha-170000-m03
	I0328 00:13:12.068614   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:12.068614   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:12.068614   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:12.073330   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:13:12.273672   13512 request.go:629] Waited for 197.9513ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:13:12.273854   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:13:12.274057   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:12.274057   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:12.274057   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:12.281351   13512 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0328 00:13:12.283049   13512 pod_ready.go:92] pod "etcd-ha-170000-m03" in "kube-system" namespace has status "Ready":"True"
	I0328 00:13:12.283099   13512 pod_ready.go:81] duration metric: took 317.0393ms for pod "etcd-ha-170000-m03" in "kube-system" namespace to be "Ready" ...
	I0328 00:13:12.283099   13512 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-170000" in "kube-system" namespace to be "Ready" ...
	I0328 00:13:12.476566   13512 request.go:629] Waited for 193.4662ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000
	I0328 00:13:12.477043   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000
	I0328 00:13:12.477043   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:12.477101   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:12.477101   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:12.481450   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:13:12.667022   13512 request.go:629] Waited for 183.8337ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.239.31:8443/api/v1/nodes/ha-170000
	I0328 00:13:12.667022   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000
	I0328 00:13:12.667022   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:12.667022   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:12.667022   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:12.673026   13512 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0328 00:13:12.674023   13512 pod_ready.go:92] pod "kube-apiserver-ha-170000" in "kube-system" namespace has status "Ready":"True"
	I0328 00:13:12.674023   13512 pod_ready.go:81] duration metric: took 390.9219ms for pod "kube-apiserver-ha-170000" in "kube-system" namespace to be "Ready" ...
	I0328 00:13:12.674023   13512 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-170000-m02" in "kube-system" namespace to be "Ready" ...
	I0328 00:13:12.872220   13512 request.go:629] Waited for 198.1526ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m02
	I0328 00:13:12.872527   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m02
	I0328 00:13:12.872527   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:12.872598   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:12.872598   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:12.881609   13512 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0328 00:13:13.077518   13512 request.go:629] Waited for 194.4658ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.239.31:8443/api/v1/nodes/ha-170000-m02
	I0328 00:13:13.077518   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m02
	I0328 00:13:13.077518   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:13.077518   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:13.077518   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:13.082598   13512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 00:13:13.084080   13512 pod_ready.go:92] pod "kube-apiserver-ha-170000-m02" in "kube-system" namespace has status "Ready":"True"
	I0328 00:13:13.084141   13512 pod_ready.go:81] duration metric: took 410.1159ms for pod "kube-apiserver-ha-170000-m02" in "kube-system" namespace to be "Ready" ...
	I0328 00:13:13.084185   13512 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-170000-m03" in "kube-system" namespace to be "Ready" ...
	I0328 00:13:13.267360   13512 request.go:629] Waited for 182.6691ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m03
	I0328 00:13:13.267497   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m03
	I0328 00:13:13.267497   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:13.267497   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:13.267497   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:13.277120   13512 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0328 00:13:13.473447   13512 request.go:629] Waited for 195.1512ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:13:13.473569   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:13:13.473569   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:13.473569   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:13.473640   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:13.483204   13512 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0328 00:13:13.678963   13512 request.go:629] Waited for 92.9148ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m03
	I0328 00:13:13.678963   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m03
	I0328 00:13:13.678963   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:13.678963   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:13.679242   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:13.684756   13512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 00:13:13.866675   13512 request.go:629] Waited for 180.0707ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:13:13.866840   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:13:13.866869   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:13.866869   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:13.866869   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:13.871358   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:13:14.085938   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m03
	I0328 00:13:14.085938   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:14.085938   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:14.085938   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:14.093499   13512 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0328 00:13:14.271953   13512 request.go:629] Waited for 177.553ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:13:14.272166   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:13:14.272292   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:14.272292   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:14.272292   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:14.277077   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:13:14.598425   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m03
	I0328 00:13:14.598425   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:14.598425   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:14.598425   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:14.604991   13512 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0328 00:13:14.678420   13512 request.go:629] Waited for 71.0218ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:13:14.678520   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:13:14.678520   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:14.678590   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:14.678590   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:14.684096   13512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 00:13:15.099433   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m03
	I0328 00:13:15.099433   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:15.099554   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:15.099554   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:15.105494   13512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 00:13:15.107402   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:13:15.107402   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:15.107402   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:15.107487   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:15.111535   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:13:15.112678   13512 pod_ready.go:102] pod "kube-apiserver-ha-170000-m03" in "kube-system" namespace has status "Ready":"False"
	I0328 00:13:15.595529   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m03
	I0328 00:13:15.595606   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:15.595663   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:15.595663   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:15.599907   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:13:15.602060   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:13:15.602115   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:15.602115   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:15.602141   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:15.608680   13512 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0328 00:13:16.096291   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m03
	I0328 00:13:16.096466   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:16.096466   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:16.096466   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:16.110051   13512 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0328 00:13:16.111607   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:13:16.111607   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:16.111689   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:16.111689   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:16.116493   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:13:16.585627   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m03
	I0328 00:13:16.585753   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:16.585753   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:16.585753   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:16.592878   13512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 00:13:16.594390   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:13:16.594447   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:16.594447   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:16.594447   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:16.598583   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:13:17.090469   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m03
	I0328 00:13:17.090558   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:17.090622   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:17.090622   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:17.096234   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:13:17.097482   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:13:17.097536   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:17.097536   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:17.097591   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:17.101869   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:13:17.589866   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m03
	I0328 00:13:17.589866   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:17.589866   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:17.589866   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:17.596067   13512 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0328 00:13:17.598164   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:13:17.598164   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:17.598164   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:17.598164   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:17.605469   13512 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0328 00:13:17.606285   13512 pod_ready.go:102] pod "kube-apiserver-ha-170000-m03" in "kube-system" namespace has status "Ready":"False"
	I0328 00:13:18.093075   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m03
	I0328 00:13:18.093075   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:18.093075   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:18.093075   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:18.102111   13512 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0328 00:13:18.103351   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:13:18.103407   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:18.103407   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:18.103478   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:18.108775   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:13:18.590854   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m03
	I0328 00:13:18.590854   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:18.590854   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:18.590854   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:18.596713   13512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 00:13:18.598068   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:13:18.598068   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:18.598128   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:18.598128   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:18.602366   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:13:19.090274   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m03
	I0328 00:13:19.090482   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:19.090482   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:19.090482   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:19.096914   13512 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0328 00:13:19.098571   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:13:19.098644   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:19.098644   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:19.098644   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:19.102970   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:13:19.592433   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m03
	I0328 00:13:19.592627   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:19.592627   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:19.592627   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:19.598705   13512 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0328 00:13:19.599994   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:13:19.599994   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:19.599994   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:19.599994   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:19.604626   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:13:20.095379   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m03
	I0328 00:13:20.095379   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:20.095379   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:20.095379   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:20.103977   13512 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0328 00:13:20.105321   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:13:20.105321   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:20.105321   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:20.105321   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:20.109387   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:13:20.110381   13512 pod_ready.go:102] pod "kube-apiserver-ha-170000-m03" in "kube-system" namespace has status "Ready":"False"
	I0328 00:13:20.599457   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m03
	I0328 00:13:20.599457   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:20.599457   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:20.599457   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:20.606916   13512 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0328 00:13:20.607783   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:13:20.607783   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:20.607783   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:20.607783   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:20.616105   13512 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0328 00:13:21.086670   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m03
	I0328 00:13:21.086743   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:21.086743   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:21.086743   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:21.092076   13512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 00:13:21.093883   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:13:21.093883   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:21.093883   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:21.093883   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:21.098668   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:13:21.588722   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m03
	I0328 00:13:21.589164   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:21.589164   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:21.589164   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:21.594272   13512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 00:13:21.596383   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:13:21.596383   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:21.596383   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:21.596383   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:21.602780   13512 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0328 00:13:22.093160   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m03
	I0328 00:13:22.093265   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:22.093265   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:22.093265   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:22.099065   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:13:22.100084   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:13:22.100203   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:22.100203   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:22.100203   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:22.104891   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:13:22.593403   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m03
	I0328 00:13:22.593403   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:22.593403   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:22.593403   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:22.599254   13512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 00:13:22.600050   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:13:22.600620   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:22.600620   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:22.600620   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:22.605973   13512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 00:13:22.606730   13512 pod_ready.go:102] pod "kube-apiserver-ha-170000-m03" in "kube-system" namespace has status "Ready":"False"
	I0328 00:13:23.093609   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m03
	I0328 00:13:23.093609   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:23.093712   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:23.093712   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:23.099076   13512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 00:13:23.100708   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:13:23.100767   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:23.100767   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:23.100767   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:23.104303   13512 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0328 00:13:23.594477   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m03
	I0328 00:13:23.594477   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:23.594477   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:23.594477   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:23.600550   13512 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0328 00:13:23.602660   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:13:23.602660   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:23.602803   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:23.602803   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:23.607853   13512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 00:13:24.094653   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m03
	I0328 00:13:24.094653   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:24.094653   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:24.094653   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:24.103835   13512 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0328 00:13:24.105603   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:13:24.105603   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:24.105603   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:24.105670   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:24.109936   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:13:24.585549   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m03
	I0328 00:13:24.585549   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:24.585549   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:24.585549   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:24.592085   13512 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0328 00:13:24.592085   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:13:24.592085   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:24.592085   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:24.592085   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:24.598735   13512 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0328 00:13:25.089211   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m03
	I0328 00:13:25.089211   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:25.089300   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:25.089300   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:25.095980   13512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 00:13:25.096861   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:13:25.096861   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:25.096861   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:25.096861   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:25.101682   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:13:25.102167   13512 pod_ready.go:102] pod "kube-apiserver-ha-170000-m03" in "kube-system" namespace has status "Ready":"False"
	I0328 00:13:25.591982   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m03
	I0328 00:13:25.592207   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:25.592207   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:25.592207   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:25.601020   13512 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0328 00:13:25.602327   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:13:25.602365   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:25.602365   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:25.602430   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:25.606993   13512 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0328 00:13:26.092924   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m03
	I0328 00:13:26.092924   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:26.093204   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:26.093204   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:26.099632   13512 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0328 00:13:26.101100   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:13:26.101100   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:26.101100   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:26.101100   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:26.105607   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:13:26.593455   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m03
	I0328 00:13:26.593517   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:26.593517   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:26.593517   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:26.599233   13512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 00:13:26.601240   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:13:26.601240   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:26.601240   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:26.601240   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:26.605766   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:13:27.093999   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m03
	I0328 00:13:27.094253   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:27.094253   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:27.094253   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:27.102209   13512 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0328 00:13:27.104044   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:13:27.104044   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:27.104118   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:27.104118   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:27.108873   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:13:27.109647   13512 pod_ready.go:102] pod "kube-apiserver-ha-170000-m03" in "kube-system" namespace has status "Ready":"False"
	I0328 00:13:27.594342   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m03
	I0328 00:13:27.594342   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:27.594342   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:27.594342   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:27.599205   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:13:27.600271   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:13:27.600355   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:27.600355   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:27.600355   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:27.608801   13512 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0328 00:13:28.096343   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m03
	I0328 00:13:28.096343   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:28.096343   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:28.096343   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:28.100969   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:13:28.102671   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:13:28.102671   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:28.102671   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:28.102671   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:28.107267   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:13:28.597167   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m03
	I0328 00:13:28.597167   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:28.597167   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:28.597410   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:28.601668   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:13:28.603437   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:13:28.603437   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:28.603437   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:28.603437   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:28.610973   13512 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0328 00:13:29.085841   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m03
	I0328 00:13:29.085921   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:29.085921   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:29.085921   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:29.091554   13512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 00:13:29.093151   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:13:29.093151   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:29.093151   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:29.093151   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:29.101731   13512 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0328 00:13:29.589285   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m03
	I0328 00:13:29.589285   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:29.589285   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:29.589285   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:29.596531   13512 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0328 00:13:29.598487   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:13:29.598487   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:29.598487   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:29.598487   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:29.604093   13512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 00:13:29.605165   13512 pod_ready.go:102] pod "kube-apiserver-ha-170000-m03" in "kube-system" namespace has status "Ready":"False"
	I0328 00:13:30.091759   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m03
	I0328 00:13:30.091826   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:30.091826   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:30.091826   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:30.113928   13512 round_trippers.go:574] Response Status: 200 OK in 21 milliseconds
	I0328 00:13:30.115225   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:13:30.115225   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:30.115225   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:30.115225   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:30.119654   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:13:30.592722   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m03
	I0328 00:13:30.592722   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:30.592722   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:30.592722   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:30.599749   13512 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0328 00:13:30.600737   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:13:30.600737   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:30.600737   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:30.600737   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:30.605831   13512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 00:13:31.098612   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m03
	I0328 00:13:31.098612   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:31.098612   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:31.098612   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:31.103082   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:13:31.104480   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:13:31.104536   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:31.104536   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:31.104536   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:31.110459   13512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 00:13:31.597330   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m03
	I0328 00:13:31.597330   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:31.597330   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:31.597330   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:31.603939   13512 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0328 00:13:31.605356   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:13:31.605356   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:31.605356   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:31.605462   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:31.612948   13512 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0328 00:13:31.612948   13512 pod_ready.go:102] pod "kube-apiserver-ha-170000-m03" in "kube-system" namespace has status "Ready":"False"
	I0328 00:13:32.097666   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m03
	I0328 00:13:32.097666   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:32.097666   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:32.097666   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:32.115920   13512 round_trippers.go:574] Response Status: 200 OK in 18 milliseconds
	I0328 00:13:32.116874   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:13:32.116874   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:32.116874   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:32.116874   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:32.124488   13512 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0328 00:13:32.596817   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m03
	I0328 00:13:32.596817   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:32.596817   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:32.596817   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:32.602552   13512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 00:13:32.604431   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:13:32.604431   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:32.604431   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:32.604431   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:32.609050   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:13:33.099353   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m03
	I0328 00:13:33.099432   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:33.099432   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:33.099496   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:33.105748   13512 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0328 00:13:33.107495   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:13:33.107495   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:33.107495   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:33.107495   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:33.112356   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:13:33.586766   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m03
	I0328 00:13:33.587041   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:33.587041   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:33.587041   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:33.593820   13512 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0328 00:13:33.595486   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:13:33.595536   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:33.595536   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:33.595581   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:33.599877   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:13:34.089974   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m03
	I0328 00:13:34.089974   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:34.090040   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:34.090040   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:34.095711   13512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 00:13:34.098516   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:13:34.098571   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:34.098571   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:34.098571   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:34.101769   13512 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0328 00:13:34.103846   13512 pod_ready.go:102] pod "kube-apiserver-ha-170000-m03" in "kube-system" namespace has status "Ready":"False"
	I0328 00:13:34.592230   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m03
	I0328 00:13:34.592230   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:34.592230   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:34.592522   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:34.598583   13512 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0328 00:13:34.601026   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:13:34.601026   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:34.601026   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:34.601026   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:34.606533   13512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 00:13:35.089953   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m03
	I0328 00:13:35.090199   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:35.090199   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:35.090199   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:35.099321   13512 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0328 00:13:35.103091   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:13:35.103091   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:35.103091   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:35.103091   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:35.115395   13512 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0328 00:13:35.585510   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m03
	I0328 00:13:35.585592   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:35.585592   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:35.585592   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:35.592022   13512 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0328 00:13:35.593438   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:13:35.593543   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:35.593543   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:35.593543   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:35.598430   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:13:36.089153   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m03
	I0328 00:13:36.089153   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:36.089243   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:36.089243   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:36.095520   13512 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0328 00:13:36.097195   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:13:36.097195   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:36.097195   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:36.097195   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:36.102701   13512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 00:13:36.589901   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m03
	I0328 00:13:36.589978   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:36.589978   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:36.589978   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:36.598429   13512 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0328 00:13:36.599372   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:13:36.599372   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:36.599372   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:36.599372   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:36.605227   13512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 00:13:36.605884   13512 pod_ready.go:102] pod "kube-apiserver-ha-170000-m03" in "kube-system" namespace has status "Ready":"False"
	I0328 00:13:37.091158   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m03
	I0328 00:13:37.091277   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:37.091277   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:37.091277   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:37.096719   13512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 00:13:37.098492   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:13:37.098492   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:37.098492   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:37.098556   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:37.104530   13512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 00:13:37.589655   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m03
	I0328 00:13:37.589655   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:37.589655   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:37.589655   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:37.597876   13512 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0328 00:13:37.599792   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:13:37.599896   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:37.599896   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:37.599978   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:37.606753   13512 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0328 00:13:38.087351   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m03
	I0328 00:13:38.087351   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:38.087351   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:38.087351   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:38.093869   13512 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0328 00:13:38.095305   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:13:38.095381   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:38.095381   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:38.095381   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:38.099205   13512 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0328 00:13:38.590543   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m03
	I0328 00:13:38.590543   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:38.590543   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:38.590543   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:38.596434   13512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 00:13:38.598503   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:13:38.598580   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:38.598580   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:38.598580   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:38.603808   13512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 00:13:39.091435   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m03
	I0328 00:13:39.091435   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:39.091636   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:39.091636   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:39.097959   13512 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0328 00:13:39.099580   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:13:39.099580   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:39.099580   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:39.099580   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:39.107647   13512 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0328 00:13:39.108642   13512 pod_ready.go:102] pod "kube-apiserver-ha-170000-m03" in "kube-system" namespace has status "Ready":"False"
	I0328 00:13:39.592076   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m03
	I0328 00:13:39.592421   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:39.592421   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:39.592421   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:39.597686   13512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 00:13:39.599328   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:13:39.599494   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:39.599494   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:39.599494   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:39.607241   13512 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0328 00:13:40.092724   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m03
	I0328 00:13:40.092724   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:40.092724   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:40.092724   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:40.099137   13512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 00:13:40.100308   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:13:40.100308   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:40.100308   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:40.100308   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:40.107346   13512 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0328 00:13:40.591297   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m03
	I0328 00:13:40.591297   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:40.591492   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:40.591492   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:40.595887   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:13:40.598010   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:13:40.598010   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:40.598010   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:40.598010   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:40.602243   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:13:41.095829   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m03
	I0328 00:13:41.096166   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:41.096166   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:41.096166   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:41.103258   13512 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0328 00:13:41.104425   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:13:41.104482   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:41.104482   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:41.104482   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:41.108659   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:13:41.109987   13512 pod_ready.go:102] pod "kube-apiserver-ha-170000-m03" in "kube-system" namespace has status "Ready":"False"
	I0328 00:13:41.596182   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m03
	I0328 00:13:41.596182   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:41.596369   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:41.596369   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:41.602785   13512 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0328 00:13:41.604457   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:13:41.604590   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:41.604590   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:41.604590   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:41.609440   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:13:42.098415   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m03
	I0328 00:13:42.098530   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:42.098530   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:42.098530   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:42.102991   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:13:42.104551   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:13:42.104551   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:42.104611   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:42.104611   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:42.108428   13512 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0328 00:13:42.597328   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m03
	I0328 00:13:42.597328   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:42.597328   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:42.597328   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:42.603340   13512 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0328 00:13:42.604367   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:13:42.604367   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:42.604367   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:42.604367   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:42.608710   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:13:43.100254   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m03
	I0328 00:13:43.100254   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:43.100254   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:43.100254   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:43.106665   13512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 00:13:43.108085   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:13:43.108085   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:43.108158   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:43.108158   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:43.113114   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:13:43.113967   13512 pod_ready.go:102] pod "kube-apiserver-ha-170000-m03" in "kube-system" namespace has status "Ready":"False"
	I0328 00:13:43.585631   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m03
	I0328 00:13:43.585791   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:43.585876   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:43.585876   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:43.590397   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:13:43.591604   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:13:43.591604   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:43.591604   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:43.591604   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:43.595215   13512 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0328 00:13:44.085621   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m03
	I0328 00:13:44.085680   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:44.085680   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:44.085680   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:44.090000   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:13:44.090998   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:13:44.090998   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:44.090998   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:44.090998   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:44.095084   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:13:44.589132   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m03
	I0328 00:13:44.589374   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:44.589374   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:44.589374   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:44.594889   13512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 00:13:44.595959   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:13:44.595959   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:44.596018   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:44.596018   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:44.604495   13512 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0328 00:13:45.089720   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m03
	I0328 00:13:45.089720   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:45.089720   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:45.089720   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:45.095312   13512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 00:13:45.096849   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:13:45.096849   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:45.096849   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:45.096849   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:45.101221   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:13:45.593132   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m03
	I0328 00:13:45.593132   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:45.593132   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:45.593132   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:45.599705   13512 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0328 00:13:45.599705   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:13:45.599705   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:45.599705   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:45.599705   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:45.608870   13512 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0328 00:13:45.609825   13512 pod_ready.go:102] pod "kube-apiserver-ha-170000-m03" in "kube-system" namespace has status "Ready":"False"
	I0328 00:13:46.099774   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m03
	I0328 00:13:46.099887   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:46.099887   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:46.099887   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:46.106307   13512 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0328 00:13:46.107645   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:13:46.107645   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:46.107645   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:46.107645   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:46.115071   13512 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0328 00:13:46.589304   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m03
	I0328 00:13:46.589304   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:46.589304   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:46.589304   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:46.599380   13512 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0328 00:13:46.602460   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:13:46.602460   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:46.602460   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:46.602460   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:46.611324   13512 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0328 00:13:47.084809   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m03
	I0328 00:13:47.084886   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:47.084931   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:47.084931   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:47.089927   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:13:47.091312   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:13:47.091312   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:47.091312   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:47.091312   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:47.095907   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:13:47.592528   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m03
	I0328 00:13:47.592528   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:47.592528   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:47.592528   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:47.598175   13512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 00:13:47.600012   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:13:47.600012   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:47.600012   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:47.600012   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:47.605113   13512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 00:13:48.094213   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m03
	I0328 00:13:48.094213   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:48.094213   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:48.094213   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:48.099931   13512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 00:13:48.100987   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:13:48.101062   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:48.101062   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:48.101062   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:48.105263   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:13:48.106717   13512 pod_ready.go:102] pod "kube-apiserver-ha-170000-m03" in "kube-system" namespace has status "Ready":"False"
	I0328 00:13:48.594129   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m03
	I0328 00:13:48.594216   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:48.594216   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:48.594216   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:48.600857   13512 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0328 00:13:48.601963   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:13:48.601963   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:48.601963   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:48.601963   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:48.605663   13512 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0328 00:13:49.094917   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m03
	I0328 00:13:49.095103   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:49.095103   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:49.095103   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:49.103278   13512 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0328 00:13:49.104457   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:13:49.104457   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:49.104457   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:49.104457   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:49.109327   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:13:49.594648   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m03
	I0328 00:13:49.594771   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:49.594771   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:49.594771   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:49.600116   13512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 00:13:49.601637   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:13:49.601637   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:49.601637   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:49.601637   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:49.606910   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:13:50.096535   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m03
	I0328 00:13:50.096535   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:50.096535   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:50.096535   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:50.102017   13512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 00:13:50.103500   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:13:50.103587   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:50.103587   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:50.103587   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:50.107915   13512 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0328 00:13:50.108455   13512 pod_ready.go:102] pod "kube-apiserver-ha-170000-m03" in "kube-system" namespace has status "Ready":"False"
	I0328 00:13:50.596075   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m03
	I0328 00:13:50.596075   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:50.596367   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:50.596367   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:50.601844   13512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 00:13:50.602775   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:13:50.602849   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:50.602849   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:50.602849   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:50.607727   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:13:51.092567   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m03
	I0328 00:13:51.092567   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:51.092567   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:51.092567   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:51.097986   13512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 00:13:51.098604   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:13:51.098604   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:51.098604   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:51.098604   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:51.103235   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:13:51.592952   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m03
	I0328 00:13:51.593007   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:51.593007   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:51.593007   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:51.598607   13512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 00:13:51.600299   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:13:51.600299   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:51.600299   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:51.600299   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:51.604078   13512 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0328 00:13:52.095221   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m03
	I0328 00:13:52.095221   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:52.095221   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:52.095437   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:52.102465   13512 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0328 00:13:52.103778   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:13:52.103986   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:52.103986   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:52.103986   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:52.108763   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:13:52.109628   13512 pod_ready.go:102] pod "kube-apiserver-ha-170000-m03" in "kube-system" namespace has status "Ready":"False"
	I0328 00:13:52.594144   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m03
	I0328 00:13:52.594144   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:52.594144   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:52.594346   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:52.598597   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:13:52.600543   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:13:52.600640   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:52.600640   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:52.600640   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:52.604958   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:13:53.094339   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m03
	I0328 00:13:53.094339   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:53.094339   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:53.094339   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:53.099296   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:13:53.100411   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:13:53.100411   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:53.100411   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:53.100411   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:53.105239   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:13:53.597958   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m03
	I0328 00:13:53.597958   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:53.597958   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:53.597958   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:53.606300   13512 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0328 00:13:53.607688   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:13:53.607997   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:53.607997   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:53.607997   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:53.611290   13512 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0328 00:13:54.099720   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m03
	I0328 00:13:54.099812   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:54.099812   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:54.099812   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:54.106078   13512 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0328 00:13:54.107358   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:13:54.107358   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:54.107358   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:54.107358   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:54.112576   13512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 00:13:54.113825   13512 pod_ready.go:102] pod "kube-apiserver-ha-170000-m03" in "kube-system" namespace has status "Ready":"False"
	I0328 00:13:54.598711   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m03
	I0328 00:13:54.598711   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:54.598711   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:54.598711   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:54.607687   13512 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0328 00:13:54.608828   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:13:54.608828   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:54.608828   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:54.608828   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:54.612675   13512 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0328 00:13:55.097959   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m03
	I0328 00:13:55.098091   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:55.098091   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:55.098091   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:55.103151   13512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 00:13:55.105320   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:13:55.105320   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:55.105320   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:55.105320   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:55.113724   13512 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0328 00:13:55.598832   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m03
	I0328 00:13:55.598832   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:55.598832   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:55.598832   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:55.604545   13512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 00:13:55.606119   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:13:55.606119   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:55.606119   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:55.606119   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:55.610650   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:13:56.087083   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m03
	I0328 00:13:56.087083   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:56.087083   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:56.087083   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:56.094684   13512 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0328 00:13:56.096524   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:13:56.096598   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:56.096598   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:56.096598   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:56.101685   13512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 00:13:56.589938   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m03
	I0328 00:13:56.589938   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:56.589938   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:56.589938   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:56.596364   13512 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0328 00:13:56.598218   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:13:56.598218   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:56.598379   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:56.598379   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:56.603279   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:13:56.604554   13512 pod_ready.go:102] pod "kube-apiserver-ha-170000-m03" in "kube-system" namespace has status "Ready":"False"
	I0328 00:13:57.091611   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m03
	I0328 00:13:57.091611   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:57.091611   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:57.091611   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:57.097965   13512 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0328 00:13:57.099393   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:13:57.099393   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:57.099524   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:57.099524   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:57.105147   13512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 00:13:57.592480   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m03
	I0328 00:13:57.592564   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:57.592623   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:57.592623   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:57.598660   13512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 00:13:57.599263   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:13:57.599263   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:57.599263   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:57.599263   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:57.603925   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:13:58.095478   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m03
	I0328 00:13:58.095478   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:58.095478   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:58.095478   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:58.100942   13512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 00:13:58.102763   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:13:58.102763   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:58.102763   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:58.102763   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:58.111483   13512 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0328 00:13:58.594512   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m03
	I0328 00:13:58.594785   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:58.594785   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:58.594785   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:58.598981   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:13:58.600499   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:13:58.600563   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:58.600563   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:58.600563   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:58.617171   13512 round_trippers.go:574] Response Status: 200 OK in 16 milliseconds
	I0328 00:13:58.618360   13512 pod_ready.go:102] pod "kube-apiserver-ha-170000-m03" in "kube-system" namespace has status "Ready":"False"
	I0328 00:13:59.096697   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m03
	I0328 00:13:59.096697   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:59.096697   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:59.096697   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:59.102942   13512 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0328 00:13:59.105802   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:13:59.105868   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:59.105868   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:59.105868   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:59.110525   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:13:59.594699   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m03
	I0328 00:13:59.594699   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:59.594699   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:59.594699   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:59.600380   13512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 00:13:59.602585   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:13:59.602585   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:59.602585   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:59.602651   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:59.607146   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:14:00.092956   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m03
	I0328 00:14:00.093196   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:00.093196   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:00.093299   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:00.099408   13512 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0328 00:14:00.100644   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:14:00.100715   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:00.100715   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:00.100715   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:00.104970   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:14:00.591437   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m03
	I0328 00:14:00.591437   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:00.591437   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:00.591437   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:00.598204   13512 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0328 00:14:00.599163   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:14:00.599223   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:00.599223   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:00.599223   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:00.603986   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:14:01.093921   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m03
	I0328 00:14:01.093921   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:01.093921   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:01.093921   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:01.103422   13512 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0328 00:14:01.104472   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:14:01.104534   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:01.104534   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:01.104534   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:01.108873   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:14:01.108873   13512 pod_ready.go:102] pod "kube-apiserver-ha-170000-m03" in "kube-system" namespace has status "Ready":"False"
	I0328 00:14:01.585847   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m03
	I0328 00:14:01.585847   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:01.585847   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:01.585847   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:01.593503   13512 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0328 00:14:01.596592   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:14:01.596592   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:01.596592   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:01.596592   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:01.601210   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:14:02.094133   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m03
	I0328 00:14:02.094188   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:02.094188   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:02.094188   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:02.099808   13512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 00:14:02.100997   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:14:02.101058   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:02.101058   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:02.101058   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:02.104872   13512 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0328 00:14:02.587733   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m03
	I0328 00:14:02.587853   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:02.587853   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:02.587853   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:02.596204   13512 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0328 00:14:02.597878   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:14:02.597878   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:02.597938   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:02.597938   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:02.604889   13512 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0328 00:14:03.091431   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m03
	I0328 00:14:03.091431   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:03.091431   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:03.091431   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:03.097083   13512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 00:14:03.099201   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:14:03.099257   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:03.099257   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:03.099257   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:03.103589   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:14:03.585946   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m03
	I0328 00:14:03.585946   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:03.585946   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:03.585946   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:03.590548   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:14:03.592564   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:14:03.592646   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:03.592646   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:03.592646   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:03.597916   13512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 00:14:03.597916   13512 pod_ready.go:102] pod "kube-apiserver-ha-170000-m03" in "kube-system" namespace has status "Ready":"False"
	I0328 00:14:04.091347   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m03
	I0328 00:14:04.091347   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:04.091347   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:04.091347   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:04.097016   13512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 00:14:04.098804   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:14:04.098850   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:04.098850   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:04.098850   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:04.105001   13512 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0328 00:14:04.593783   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m03
	I0328 00:14:04.593919   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:04.593919   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:04.593969   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:04.599357   13512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 00:14:04.600435   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:14:04.600508   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:04.600508   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:04.600562   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:04.604791   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:14:05.087461   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m03
	I0328 00:14:05.087461   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:05.087541   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:05.087541   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:05.100612   13512 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0328 00:14:05.102476   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:14:05.102476   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:05.102476   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:05.102476   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:05.111616   13512 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0328 00:14:05.586138   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m03
	I0328 00:14:05.586448   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:05.586448   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:05.586448   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:05.593703   13512 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0328 00:14:05.595108   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:14:05.595108   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:05.595216   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:05.595216   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:05.599503   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:14:05.600818   13512 pod_ready.go:102] pod "kube-apiserver-ha-170000-m03" in "kube-system" namespace has status "Ready":"False"
	I0328 00:14:06.090869   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m03
	I0328 00:14:06.091173   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:06.091208   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:06.091208   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:06.098891   13512 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0328 00:14:06.100088   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:14:06.100088   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:06.100088   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:06.100088   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:06.108590   13512 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0328 00:14:06.594644   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m03
	I0328 00:14:06.594644   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:06.594644   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:06.594644   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:06.603045   13512 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0328 00:14:06.603979   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:14:06.603979   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:06.603979   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:06.603979   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:06.608612   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:14:07.096661   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m03
	I0328 00:14:07.096661   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:07.096661   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:07.096661   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:07.103314   13512 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0328 00:14:07.105052   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:14:07.105052   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:07.105052   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:07.105052   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:07.110165   13512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 00:14:07.595727   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m03
	I0328 00:14:07.595789   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:07.595789   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:07.595848   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:07.601333   13512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 00:14:07.603421   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:14:07.603421   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:07.603421   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:07.603421   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:07.606713   13512 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0328 00:14:07.608329   13512 pod_ready.go:102] pod "kube-apiserver-ha-170000-m03" in "kube-system" namespace has status "Ready":"False"
	I0328 00:14:08.097410   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m03
	I0328 00:14:08.097410   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:08.097410   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:08.097410   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:08.103066   13512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 00:14:08.104623   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:14:08.104623   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:08.104623   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:08.104623   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:08.109238   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:14:08.598431   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m03
	I0328 00:14:08.598431   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:08.598431   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:08.598431   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:08.605500   13512 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0328 00:14:08.607284   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:14:08.607284   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:08.607284   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:08.607284   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:08.612125   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:14:09.088064   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m03
	I0328 00:14:09.088064   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:09.088197   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:09.088197   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:09.095686   13512 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0328 00:14:09.096689   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:14:09.096689   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:09.096689   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:09.096689   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:09.103673   13512 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0328 00:14:09.597224   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m03
	I0328 00:14:09.597224   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:09.597224   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:09.597224   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:09.603831   13512 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0328 00:14:09.605771   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:14:09.605771   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:09.605771   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:09.605771   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:09.611065   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:14:09.611974   13512 pod_ready.go:102] pod "kube-apiserver-ha-170000-m03" in "kube-system" namespace has status "Ready":"False"
	I0328 00:14:10.085546   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m03
	I0328 00:14:10.085760   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:10.085836   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:10.085836   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:10.091620   13512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 00:14:10.093251   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:14:10.093309   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:10.093309   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:10.093309   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:10.097014   13512 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0328 00:14:10.591317   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m03
	I0328 00:14:10.591388   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:10.591388   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:10.591388   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:10.596237   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:14:10.597599   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:14:10.597599   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:10.597599   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:10.597599   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:10.602070   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:14:11.099422   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m03
	I0328 00:14:11.099485   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:11.099485   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:11.099485   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:11.110387   13512 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0328 00:14:11.111393   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:14:11.111393   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:11.111393   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:11.111393   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:11.115380   13512 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0328 00:14:11.589210   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m03
	I0328 00:14:11.589844   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:11.589844   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:11.589844   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:11.596277   13512 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0328 00:14:11.597507   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:14:11.597564   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:11.597594   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:11.597594   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:11.603040   13512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 00:14:12.089506   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m03
	I0328 00:14:12.089506   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:12.089995   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:12.089995   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:12.096321   13512 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0328 00:14:12.098086   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:14:12.098172   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:12.098172   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:12.098172   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:12.110419   13512 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0328 00:14:12.111291   13512 pod_ready.go:102] pod "kube-apiserver-ha-170000-m03" in "kube-system" namespace has status "Ready":"False"
	I0328 00:14:12.591998   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m03
	I0328 00:14:12.591998   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:12.591998   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:12.591998   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:12.597393   13512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 00:14:12.598765   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:14:12.598946   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:12.598946   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:12.598946   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:12.604091   13512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 00:14:13.092828   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m03
	I0328 00:14:13.092828   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:13.092925   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:13.092925   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:13.097390   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:14:13.098401   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:14:13.098401   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:13.098495   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:13.098495   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:13.102207   13512 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0328 00:14:13.591425   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m03
	I0328 00:14:13.591425   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:13.591514   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:13.591514   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:13.597387   13512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 00:14:13.598913   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:14:13.598913   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:13.598977   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:13.598977   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:13.603750   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:14:14.096012   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m03
	I0328 00:14:14.096012   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:14.096012   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:14.096012   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:14.103668   13512 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0328 00:14:14.104896   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:14:14.104896   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:14.104896   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:14.104896   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:14.110150   13512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 00:14:14.585714   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m03
	I0328 00:14:14.585714   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:14.585714   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:14.585714   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:14.597939   13512 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0328 00:14:14.599229   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:14:14.599229   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:14.599229   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:14.599229   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:14.604815   13512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 00:14:14.604815   13512 pod_ready.go:102] pod "kube-apiserver-ha-170000-m03" in "kube-system" namespace has status "Ready":"False"
	I0328 00:14:15.087288   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m03
	I0328 00:14:15.087288   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:15.087355   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:15.087355   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:15.092243   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:14:15.093453   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:14:15.093513   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:15.093513   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:15.093513   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:15.097437   13512 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0328 00:14:15.586287   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m03
	I0328 00:14:15.586520   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:15.586520   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:15.586520   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:15.592678   13512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 00:14:15.593688   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:14:15.593688   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:15.593688   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:15.593688   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:15.598201   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:14:16.087111   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m03
	I0328 00:14:16.087198   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:16.087273   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:16.087273   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:16.092695   13512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 00:14:16.093847   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:14:16.093847   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:16.093847   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:16.093847   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:16.099072   13512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 00:14:16.592426   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m03
	I0328 00:14:16.592652   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:16.592652   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:16.592652   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:16.597973   13512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 00:14:16.599259   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:14:16.599259   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:16.599259   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:16.599259   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:16.604212   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:14:16.605163   13512 pod_ready.go:102] pod "kube-apiserver-ha-170000-m03" in "kube-system" namespace has status "Ready":"False"
	I0328 00:14:17.096990   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m03
	I0328 00:14:17.096990   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:17.096990   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:17.096990   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:17.102726   13512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 00:14:17.104170   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:14:17.104232   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:17.104232   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:17.104232   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:17.114715   13512 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0328 00:14:17.600701   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m03
	I0328 00:14:17.600701   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:17.600701   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:17.600701   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:17.610583   13512 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0328 00:14:17.612915   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:14:17.612968   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:17.612968   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:17.612968   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:17.621379   13512 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0328 00:14:18.087119   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m03
	I0328 00:14:18.087119   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:18.087119   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:18.087119   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:18.093045   13512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 00:14:18.094660   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:14:18.094721   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:18.094721   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:18.094721   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:18.099551   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:14:18.101385   13512 pod_ready.go:92] pod "kube-apiserver-ha-170000-m03" in "kube-system" namespace has status "Ready":"True"
	I0328 00:14:18.101450   13512 pod_ready.go:81] duration metric: took 1m5.0168595s for pod "kube-apiserver-ha-170000-m03" in "kube-system" namespace to be "Ready" ...
	I0328 00:14:18.101537   13512 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-170000" in "kube-system" namespace to be "Ready" ...
	I0328 00:14:18.101685   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-170000
	I0328 00:14:18.101685   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:18.101685   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:18.101741   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:18.107186   13512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 00:14:18.109056   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000
	I0328 00:14:18.109056   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:18.109056   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:18.109114   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:18.124848   13512 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0328 00:14:18.125429   13512 pod_ready.go:92] pod "kube-controller-manager-ha-170000" in "kube-system" namespace has status "Ready":"True"
	I0328 00:14:18.125429   13512 pod_ready.go:81] duration metric: took 23.892ms for pod "kube-controller-manager-ha-170000" in "kube-system" namespace to be "Ready" ...
	I0328 00:14:18.125429   13512 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-170000-m02" in "kube-system" namespace to be "Ready" ...
	I0328 00:14:18.125602   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-170000-m02
	I0328 00:14:18.125602   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:18.125602   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:18.125602   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:18.130481   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:14:18.132385   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m02
	I0328 00:14:18.132385   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:18.132385   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:18.132385   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:18.149200   13512 round_trippers.go:574] Response Status: 200 OK in 16 milliseconds
	I0328 00:14:18.150809   13512 pod_ready.go:92] pod "kube-controller-manager-ha-170000-m02" in "kube-system" namespace has status "Ready":"True"
	I0328 00:14:18.150809   13512 pod_ready.go:81] duration metric: took 25.3802ms for pod "kube-controller-manager-ha-170000-m02" in "kube-system" namespace to be "Ready" ...
	I0328 00:14:18.150809   13512 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-170000-m03" in "kube-system" namespace to be "Ready" ...
	I0328 00:14:18.151045   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-170000-m03
	I0328 00:14:18.151045   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:18.151045   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:18.151045   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:18.155587   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:14:18.157527   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:14:18.157527   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:18.157527   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:18.157637   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:18.162819   13512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 00:14:18.651494   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-170000-m03
	I0328 00:14:18.651494   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:18.651726   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:18.651726   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:18.657690   13512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 00:14:18.658897   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:14:18.658897   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:18.658897   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:18.658897   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:18.664776   13512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 00:14:19.152216   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-170000-m03
	I0328 00:14:19.152216   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:19.152216   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:19.152216   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:19.159625   13512 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0328 00:14:19.161117   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:14:19.161117   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:19.161117   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:19.161117   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:19.172572   13512 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0328 00:14:19.653957   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-170000-m03
	I0328 00:14:19.653957   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:19.653957   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:19.653957   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:19.659549   13512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 00:14:19.660770   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:14:19.660770   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:19.660770   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:19.660770   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:19.666360   13512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 00:14:20.155923   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-170000-m03
	I0328 00:14:20.155923   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:20.156030   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:20.156030   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:20.165101   13512 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0328 00:14:20.166110   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:14:20.166110   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:20.166110   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:20.166110   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:20.171480   13512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 00:14:20.172298   13512 pod_ready.go:102] pod "kube-controller-manager-ha-170000-m03" in "kube-system" namespace has status "Ready":"False"
	I0328 00:14:20.661425   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-170000-m03
	I0328 00:14:20.661425   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:20.661425   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:20.661425   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:20.667470   13512 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0328 00:14:20.669420   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:14:20.669503   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:20.669503   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:20.669503   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:20.674216   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:14:21.161321   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-170000-m03
	I0328 00:14:21.161321   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:21.161321   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:21.161426   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:21.167223   13512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 00:14:21.168466   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:14:21.168466   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:21.168466   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:21.168466   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:21.194393   13512 round_trippers.go:574] Response Status: 200 OK in 25 milliseconds
	I0328 00:14:21.651478   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-170000-m03
	I0328 00:14:21.651672   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:21.651672   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:21.651737   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:21.657470   13512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 00:14:21.658852   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:14:21.658852   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:21.658852   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:21.658852   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:21.663659   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:14:22.153264   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-170000-m03
	I0328 00:14:22.153264   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:22.153346   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:22.153346   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:22.159322   13512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 00:14:22.161080   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:14:22.161136   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:22.161136   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:22.161136   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:22.165925   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:14:22.654843   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-170000-m03
	I0328 00:14:22.654923   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:22.654923   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:22.654923   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:22.661477   13512 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0328 00:14:22.662412   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:14:22.662490   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:22.662490   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:22.662490   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:22.667213   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:14:22.668252   13512 pod_ready.go:102] pod "kube-controller-manager-ha-170000-m03" in "kube-system" namespace has status "Ready":"False"
	I0328 00:14:23.155756   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-170000-m03
	I0328 00:14:23.155841   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:23.155841   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:23.155841   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:23.164182   13512 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0328 00:14:23.164900   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:14:23.164900   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:23.164900   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:23.164900   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:23.169495   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:14:23.658760   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-170000-m03
	I0328 00:14:23.658760   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:23.658760   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:23.658760   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:23.663275   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:14:23.665264   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:14:23.665306   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:23.665306   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:23.665306   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:23.669580   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:14:24.159856   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-170000-m03
	I0328 00:14:24.159856   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:24.159856   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:24.159856   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:24.165170   13512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 00:14:24.166786   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:14:24.166786   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:24.166786   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:24.166786   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:24.170257   13512 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0328 00:14:24.661776   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-170000-m03
	I0328 00:14:24.661844   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:24.661844   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:24.661844   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:24.667879   13512 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0328 00:14:24.669590   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:14:24.669780   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:24.669780   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:24.669780   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:24.677170   13512 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0328 00:14:24.678096   13512 pod_ready.go:102] pod "kube-controller-manager-ha-170000-m03" in "kube-system" namespace has status "Ready":"False"
	I0328 00:14:25.164310   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-170000-m03
	I0328 00:14:25.164310   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:25.164310   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:25.164310   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:25.169937   13512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 00:14:25.170731   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:14:25.170731   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:25.170731   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:25.170731   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:25.185348   13512 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0328 00:14:25.665314   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-170000-m03
	I0328 00:14:25.665388   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:25.665388   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:25.665388   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:25.671005   13512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 00:14:25.672282   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:14:25.672282   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:25.672282   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:25.672282   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:25.676722   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:14:26.157831   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-170000-m03
	I0328 00:14:26.157920   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:26.157920   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:26.157920   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:26.164208   13512 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0328 00:14:26.165174   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:14:26.165174   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:26.165174   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:26.165174   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:26.169654   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:14:26.657144   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-170000-m03
	I0328 00:14:26.657144   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:26.657144   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:26.657144   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:26.663624   13512 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0328 00:14:26.664510   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:14:26.664617   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:26.664617   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:26.664617   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:26.668954   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:14:27.159926   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-170000-m03
	I0328 00:14:27.160056   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:27.160056   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:27.160056   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:27.166491   13512 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0328 00:14:27.168532   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:14:27.168532   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:27.168577   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:27.168577   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:27.175543   13512 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0328 00:14:27.176222   13512 pod_ready.go:102] pod "kube-controller-manager-ha-170000-m03" in "kube-system" namespace has status "Ready":"False"
	I0328 00:14:27.659965   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-170000-m03
	I0328 00:14:27.660180   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:27.660180   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:27.660180   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:27.665328   13512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 00:14:27.667425   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:14:27.667425   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:27.667517   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:27.667517   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:27.675295   13512 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0328 00:14:28.162890   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-170000-m03
	I0328 00:14:28.163186   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:28.163291   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:28.163291   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:28.168275   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:14:28.169966   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:14:28.169966   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:28.170042   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:28.170042   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:28.176562   13512 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0328 00:14:28.663637   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-170000-m03
	I0328 00:14:28.663637   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:28.663637   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:28.663637   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:28.667347   13512 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0328 00:14:28.669385   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:14:28.669385   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:28.669385   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:28.669385   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:28.672986   13512 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0328 00:14:29.155700   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-170000-m03
	I0328 00:14:29.155700   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:29.155700   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:29.155700   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:29.160484   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:14:29.162309   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:14:29.162309   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:29.162309   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:29.162309   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:29.166907   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:14:29.660956   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-170000-m03
	I0328 00:14:29.660956   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:29.660956   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:29.660956   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:29.666189   13512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 00:14:29.668826   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:14:29.668826   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:29.669055   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:29.669055   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:29.674833   13512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 00:14:29.675581   13512 pod_ready.go:102] pod "kube-controller-manager-ha-170000-m03" in "kube-system" namespace has status "Ready":"False"
	I0328 00:14:30.166521   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-170000-m03
	I0328 00:14:30.166756   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:30.166756   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:30.166756   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:30.174007   13512 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0328 00:14:30.175080   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:14:30.175141   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:30.175141   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:30.175141   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:30.179609   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:14:30.654360   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-170000-m03
	I0328 00:14:30.654422   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:30.654422   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:30.654422   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:30.659781   13512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 00:14:30.660581   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:14:30.660581   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:30.660581   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:30.660581   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:30.666355   13512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 00:14:31.157679   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-170000-m03
	I0328 00:14:31.157745   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:31.157745   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:31.157745   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:31.167037   13512 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0328 00:14:31.167651   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:14:31.167651   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:31.168194   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:31.168194   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:31.173337   13512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 00:14:31.663420   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-170000-m03
	I0328 00:14:31.663420   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:31.663420   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:31.663420   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:31.670342   13512 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0328 00:14:31.672214   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:14:31.672214   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:31.672214   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:31.672214   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:31.676583   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:14:31.677546   13512 pod_ready.go:102] pod "kube-controller-manager-ha-170000-m03" in "kube-system" namespace has status "Ready":"False"
	I0328 00:14:32.165160   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-170000-m03
	I0328 00:14:32.165160   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:32.165160   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:32.165160   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:32.173573   13512 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0328 00:14:32.175437   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:14:32.175605   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:32.175699   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:32.175699   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:32.182495   13512 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0328 00:14:32.652995   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-170000-m03
	I0328 00:14:32.652995   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:32.652995   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:32.652995   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:32.659076   13512 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0328 00:14:32.659947   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:14:32.659947   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:32.659947   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:32.659947   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:32.664306   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:14:33.159352   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-170000-m03
	I0328 00:14:33.159649   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:33.159649   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:33.159649   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:33.165106   13512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 00:14:33.167183   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:14:33.167183   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:33.167183   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:33.167248   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:33.173131   13512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 00:14:33.659417   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-170000-m03
	I0328 00:14:33.659491   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:33.659491   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:33.659491   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:33.665762   13512 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0328 00:14:33.666948   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:14:33.667026   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:33.667026   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:33.667026   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:33.672263   13512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 00:14:34.161514   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-170000-m03
	I0328 00:14:34.161514   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:34.161514   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:34.161514   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:34.167954   13512 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0328 00:14:34.169125   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:14:34.169125   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:34.169125   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:34.169125   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:34.174426   13512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 00:14:34.175921   13512 pod_ready.go:102] pod "kube-controller-manager-ha-170000-m03" in "kube-system" namespace has status "Ready":"False"
	I0328 00:14:34.664641   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-170000-m03
	I0328 00:14:34.664745   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:34.664745   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:34.664745   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:34.670710   13512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 00:14:34.672496   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:14:34.672496   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:34.672555   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:34.672555   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:34.677044   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:14:35.166721   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-170000-m03
	I0328 00:14:35.166721   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:35.166800   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:35.166800   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:35.173158   13512 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0328 00:14:35.174092   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:14:35.174166   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:35.174166   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:35.174166   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:35.178980   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:14:35.653943   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-170000-m03
	I0328 00:14:35.653943   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:35.653943   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:35.653943   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:35.659636   13512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 00:14:35.661654   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:14:35.661739   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:35.661739   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:35.661739   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:35.667054   13512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 00:14:36.154900   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-170000-m03
	I0328 00:14:36.155267   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:36.155267   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:36.155267   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:36.160762   13512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 00:14:36.162512   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:14:36.162650   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:36.162650   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:36.162650   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:36.166778   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:14:36.655853   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-170000-m03
	I0328 00:14:36.655853   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:36.655853   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:36.655853   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:36.662238   13512 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0328 00:14:36.663533   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:14:36.663533   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:36.663533   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:36.663533   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:36.668121   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:14:36.668121   13512 pod_ready.go:102] pod "kube-controller-manager-ha-170000-m03" in "kube-system" namespace has status "Ready":"False"
	I0328 00:14:37.156431   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-170000-m03
	I0328 00:14:37.156506   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:37.156506   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:37.156506   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:37.161092   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:14:37.162614   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:14:37.162614   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:37.162614   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:37.162614   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:37.166846   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:14:37.657585   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-170000-m03
	I0328 00:14:37.657665   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:37.657665   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:37.657759   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:37.664329   13512 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0328 00:14:37.665027   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:14:37.665207   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:37.665207   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:37.665207   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:37.669811   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:14:38.158449   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-170000-m03
	I0328 00:14:38.158681   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:38.158681   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:38.158681   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:38.164353   13512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 00:14:38.165703   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:14:38.165703   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:38.165703   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:38.165703   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:38.170321   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:14:38.661390   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-170000-m03
	I0328 00:14:38.661460   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:38.661531   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:38.661531   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:38.667045   13512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 00:14:38.668344   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:14:38.668344   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:38.668417   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:38.668417   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:38.672667   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:14:38.674288   13512 pod_ready.go:102] pod "kube-controller-manager-ha-170000-m03" in "kube-system" namespace has status "Ready":"False"
	I0328 00:14:39.160857   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-170000-m03
	I0328 00:14:39.160857   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:39.160857   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:39.160857   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:39.169717   13512 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0328 00:14:39.170509   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:14:39.170509   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:39.170509   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:39.170509   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:39.175675   13512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 00:14:39.663994   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-170000-m03
	I0328 00:14:39.663994   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:39.663994   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:39.663994   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:39.670488   13512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 00:14:39.671554   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:14:39.671554   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:39.671554   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:39.671657   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:39.676825   13512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 00:14:40.153243   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-170000-m03
	I0328 00:14:40.153587   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:40.153587   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:40.153587   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:40.159923   13512 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0328 00:14:40.161636   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:14:40.161636   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:40.161636   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:40.161636   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:40.166222   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:14:40.657367   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-170000-m03
	I0328 00:14:40.657426   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:40.657426   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:40.657426   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:40.664028   13512 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0328 00:14:40.664728   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:14:40.664728   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:40.664728   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:40.664728   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:40.677659   13512 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0328 00:14:40.677659   13512 pod_ready.go:102] pod "kube-controller-manager-ha-170000-m03" in "kube-system" namespace has status "Ready":"False"
	I0328 00:14:41.162773   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-170000-m03
	I0328 00:14:41.162983   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:41.162983   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:41.162983   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:41.169404   13512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 00:14:41.169404   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:14:41.169404   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:41.170391   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:41.170391   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:41.174392   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:14:41.176391   13512 pod_ready.go:92] pod "kube-controller-manager-ha-170000-m03" in "kube-system" namespace has status "Ready":"True"
	I0328 00:14:41.176391   13512 pod_ready.go:81] duration metric: took 23.0254363s for pod "kube-controller-manager-ha-170000-m03" in "kube-system" namespace to be "Ready" ...
	I0328 00:14:41.176391   13512 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-29dwg" in "kube-system" namespace to be "Ready" ...
	I0328 00:14:41.176391   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-proxy-29dwg
	I0328 00:14:41.176391   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:41.176391   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:41.176391   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:41.184856   13512 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0328 00:14:41.185862   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:14:41.185862   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:41.185862   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:41.185862   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:41.190594   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:14:41.191219   13512 pod_ready.go:92] pod "kube-proxy-29dwg" in "kube-system" namespace has status "Ready":"True"
	I0328 00:14:41.191287   13512 pod_ready.go:81] duration metric: took 14.8965ms for pod "kube-proxy-29dwg" in "kube-system" namespace to be "Ready" ...
	I0328 00:14:41.191287   13512 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-w2z74" in "kube-system" namespace to be "Ready" ...
	I0328 00:14:41.191351   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-proxy-w2z74
	I0328 00:14:41.191465   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:41.191465   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:41.191465   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:41.195173   13512 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0328 00:14:41.196197   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000
	I0328 00:14:41.196197   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:41.196197   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:41.196197   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:41.201158   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:14:41.202236   13512 pod_ready.go:92] pod "kube-proxy-w2z74" in "kube-system" namespace has status "Ready":"True"
	I0328 00:14:41.202236   13512 pod_ready.go:81] duration metric: took 10.9482ms for pod "kube-proxy-w2z74" in "kube-system" namespace to be "Ready" ...
	I0328 00:14:41.202236   13512 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-wrvmg" in "kube-system" namespace to be "Ready" ...
	I0328 00:14:41.202236   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-proxy-wrvmg
	I0328 00:14:41.202236   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:41.202236   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:41.202236   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:41.207209   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:14:41.209086   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m02
	I0328 00:14:41.209178   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:41.209178   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:41.209178   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:41.213581   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:14:41.213581   13512 pod_ready.go:92] pod "kube-proxy-wrvmg" in "kube-system" namespace has status "Ready":"True"
	I0328 00:14:41.213581   13512 pod_ready.go:81] duration metric: took 11.3456ms for pod "kube-proxy-wrvmg" in "kube-system" namespace to be "Ready" ...
	I0328 00:14:41.213581   13512 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-170000" in "kube-system" namespace to be "Ready" ...
	I0328 00:14:41.214601   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-170000
	I0328 00:14:41.214601   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:41.214601   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:41.214601   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:41.218663   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:14:41.219099   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000
	I0328 00:14:41.219099   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:41.219099   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:41.219099   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:41.222799   13512 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0328 00:14:41.224023   13512 pod_ready.go:92] pod "kube-scheduler-ha-170000" in "kube-system" namespace has status "Ready":"True"
	I0328 00:14:41.224023   13512 pod_ready.go:81] duration metric: took 10.4414ms for pod "kube-scheduler-ha-170000" in "kube-system" namespace to be "Ready" ...
	I0328 00:14:41.224023   13512 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-170000-m02" in "kube-system" namespace to be "Ready" ...
	I0328 00:14:41.371126   13512 request.go:629] Waited for 146.7856ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-170000-m02
	I0328 00:14:41.371182   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-170000-m02
	I0328 00:14:41.371182   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:41.371182   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:41.371182   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:41.376824   13512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 00:14:41.574646   13512 request.go:629] Waited for 196.133ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.239.31:8443/api/v1/nodes/ha-170000-m02
	I0328 00:14:41.574646   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m02
	I0328 00:14:41.574646   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:41.574646   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:41.574646   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:41.579949   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:14:41.581028   13512 pod_ready.go:92] pod "kube-scheduler-ha-170000-m02" in "kube-system" namespace has status "Ready":"True"
	I0328 00:14:41.581028   13512 pod_ready.go:81] duration metric: took 357.0028ms for pod "kube-scheduler-ha-170000-m02" in "kube-system" namespace to be "Ready" ...
	I0328 00:14:41.581028   13512 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-170000-m03" in "kube-system" namespace to be "Ready" ...
	I0328 00:14:41.777600   13512 request.go:629] Waited for 195.9745ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-170000-m03
	I0328 00:14:41.777663   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-170000-m03
	I0328 00:14:41.777722   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:41.777722   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:41.777722   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:41.786114   13512 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0328 00:14:41.965251   13512 request.go:629] Waited for 177.6128ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:14:41.965461   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:14:41.965461   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:41.965461   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:41.965591   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:41.974823   13512 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0328 00:14:41.975612   13512 pod_ready.go:92] pod "kube-scheduler-ha-170000-m03" in "kube-system" namespace has status "Ready":"True"
	I0328 00:14:41.975612   13512 pod_ready.go:81] duration metric: took 394.3864ms for pod "kube-scheduler-ha-170000-m03" in "kube-system" namespace to be "Ready" ...
	I0328 00:14:41.975612   13512 pod_ready.go:38] duration metric: took 1m30.1022361s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0328 00:14:41.975737   13512 api_server.go:52] waiting for apiserver process to appear ...
	I0328 00:14:41.987127   13512 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0328 00:14:42.015825   13512 logs.go:276] 2 containers: [2d1fcac82c22 469d6ee62f5d]
	I0328 00:14:42.026489   13512 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0328 00:14:42.052951   13512 logs.go:276] 1 containers: [876120cb9271]
	I0328 00:14:42.064829   13512 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0328 00:14:42.094836   13512 logs.go:276] 0 containers: []
	W0328 00:14:42.094935   13512 logs.go:278] No container was found matching "coredns"
	I0328 00:14:42.104927   13512 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0328 00:14:42.130106   13512 logs.go:276] 1 containers: [7c734b945c80]
	I0328 00:14:42.139720   13512 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0328 00:14:42.168581   13512 logs.go:276] 1 containers: [9c877ca8a645]
	I0328 00:14:42.178080   13512 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0328 00:14:42.204210   13512 logs.go:276] 2 containers: [1c949d54f393 1d96dd72244b]
	I0328 00:14:42.214594   13512 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0328 00:14:42.239935   13512 logs.go:276] 1 containers: [6dcd6df77ad0]
	I0328 00:14:42.239935   13512 logs.go:123] Gathering logs for kube-controller-manager [1c949d54f393] ...
	I0328 00:14:42.239935   13512 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c949d54f393"
	I0328 00:14:42.293277   13512 logs.go:123] Gathering logs for kindnet [6dcd6df77ad0] ...
	I0328 00:14:42.293277   13512 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6dcd6df77ad0"
	I0328 00:14:42.330955   13512 logs.go:123] Gathering logs for Docker ...
	I0328 00:14:42.331068   13512 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0328 00:14:42.405581   13512 logs.go:123] Gathering logs for dmesg ...
	I0328 00:14:42.405581   13512 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 00:14:42.440376   13512 logs.go:123] Gathering logs for kube-apiserver [2d1fcac82c22] ...
	I0328 00:14:42.440376   13512 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d1fcac82c22"
	I0328 00:14:42.490372   13512 logs.go:123] Gathering logs for kube-apiserver [469d6ee62f5d] ...
	I0328 00:14:42.490372   13512 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 469d6ee62f5d"
	I0328 00:14:42.582240   13512 logs.go:123] Gathering logs for etcd [876120cb9271] ...
	I0328 00:14:42.582540   13512 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 876120cb9271"
	I0328 00:14:42.637380   13512 logs.go:123] Gathering logs for kube-scheduler [7c734b945c80] ...
	I0328 00:14:42.637380   13512 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c734b945c80"
	I0328 00:14:42.704589   13512 logs.go:123] Gathering logs for container status ...
	I0328 00:14:42.704589   13512 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 00:14:42.819957   13512 logs.go:123] Gathering logs for kubelet ...
	I0328 00:14:42.820029   13512 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0328 00:14:42.894219   13512 logs.go:138] Found kubelet problem: Mar 28 00:12:52 ha-170000-m03 kubelet[2040]: W0328 00:12:52.440946    2040 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
	W0328 00:14:42.894219   13512 logs.go:138] Found kubelet problem: Mar 28 00:12:52 ha-170000-m03 kubelet[2040]: E0328 00:12:52.441012    2040 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
	W0328 00:14:42.894219   13512 logs.go:138] Found kubelet problem: Mar 28 00:12:52 ha-170000-m03 kubelet[2040]: W0328 00:12:52.441103    2040 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: nodes "ha-170000-m03" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
	W0328 00:14:42.894219   13512 logs.go:138] Found kubelet problem: Mar 28 00:12:52 ha-170000-m03 kubelet[2040]: E0328 00:12:52.441122    2040 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: nodes "ha-170000-m03" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
	W0328 00:14:42.896214   13512 logs.go:138] Found kubelet problem: Mar 28 00:12:52 ha-170000-m03 kubelet[2040]: E0328 00:12:52.466071    2040 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{ha-170000-m03.17c0c5483cee70cc  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ha-170000-m03,UID:ha-170000-m03,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ha-170000-m03,},FirstTimestamp:2024-03-28 00:12:52.451365068 +0000 UTC m=+0.802131598,LastTimestamp:2024-03-28 00:12:52.451365068 +0000 UTC m=+0.802131598,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-170000-m03,}"
	W0328 00:14:42.896214   13512 logs.go:138] Found kubelet problem: Mar 28 00:12:52 ha-170000-m03 kubelet[2040]: W0328 00:12:52.467071    2040 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0328 00:14:42.896214   13512 logs.go:138] Found kubelet problem: Mar 28 00:12:52 ha-170000-m03 kubelet[2040]: E0328 00:12:52.467127    2040 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0328 00:14:42.916233   13512 logs.go:123] Gathering logs for describe nodes ...
	I0328 00:14:42.916233   13512 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0328 00:14:43.482822   13512 logs.go:123] Gathering logs for kube-proxy [9c877ca8a645] ...
	I0328 00:14:43.482822   13512 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c877ca8a645"
	I0328 00:14:43.519577   13512 logs.go:123] Gathering logs for kube-controller-manager [1d96dd72244b] ...
	I0328 00:14:43.519577   13512 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d96dd72244b"
	I0328 00:14:43.560324   13512 out.go:304] Setting ErrFile to fd 920...
	I0328 00:14:43.560324   13512 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0328 00:14:43.560324   13512 out.go:239] X Problems detected in kubelet:
	W0328 00:14:43.560324   13512 out.go:239]   Mar 28 00:12:52 ha-170000-m03 kubelet[2040]: W0328 00:12:52.441103    2040 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: nodes "ha-170000-m03" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
	W0328 00:14:43.560324   13512 out.go:239]   Mar 28 00:12:52 ha-170000-m03 kubelet[2040]: E0328 00:12:52.441122    2040 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: nodes "ha-170000-m03" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
	W0328 00:14:43.560876   13512 out.go:239]   Mar 28 00:12:52 ha-170000-m03 kubelet[2040]: E0328 00:12:52.466071    2040 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{ha-170000-m03.17c0c5483cee70cc  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ha-170000-m03,UID:ha-170000-m03,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ha-170000-m03,},FirstTimestamp:2024-03-28 00:12:52.451365068 +0000 UTC m=+0.802131598,LastTimestamp:2024-03-28 00:12:52.451365068 +0000 UTC m=+0.802131598,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-170000-m03,}"
	W0328 00:14:43.560876   13512 out.go:239]   Mar 28 00:12:52 ha-170000-m03 kubelet[2040]: W0328 00:12:52.467071    2040 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0328 00:14:43.560876   13512 out.go:239]   Mar 28 00:12:52 ha-170000-m03 kubelet[2040]: E0328 00:12:52.467127    2040 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0328 00:14:43.560876   13512 out.go:304] Setting ErrFile to fd 920...
	I0328 00:14:43.560994   13512 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 00:14:53.589366   13512 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 00:14:53.621160   13512 api_server.go:72] duration metric: took 1m45.7329679s to wait for apiserver process to appear ...
	I0328 00:14:53.621233   13512 api_server.go:88] waiting for apiserver healthz status ...
	I0328 00:14:53.630167   13512 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0328 00:14:53.657192   13512 logs.go:276] 2 containers: [2d1fcac82c22 469d6ee62f5d]
	I0328 00:14:53.668105   13512 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0328 00:14:53.697255   13512 logs.go:276] 1 containers: [876120cb9271]
	I0328 00:14:53.706793   13512 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0328 00:14:53.733562   13512 logs.go:276] 0 containers: []
	W0328 00:14:53.733562   13512 logs.go:278] No container was found matching "coredns"
	I0328 00:14:53.744194   13512 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0328 00:14:53.780424   13512 logs.go:276] 1 containers: [7c734b945c80]
	I0328 00:14:53.790235   13512 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0328 00:14:53.817335   13512 logs.go:276] 1 containers: [9c877ca8a645]
	I0328 00:14:53.827888   13512 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0328 00:14:53.862816   13512 logs.go:276] 2 containers: [1c949d54f393 1d96dd72244b]
	I0328 00:14:53.873285   13512 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0328 00:14:53.903652   13512 logs.go:276] 1 containers: [6dcd6df77ad0]
	I0328 00:14:53.903652   13512 logs.go:123] Gathering logs for Docker ...
	I0328 00:14:53.904667   13512 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0328 00:14:53.979101   13512 logs.go:123] Gathering logs for container status ...
	I0328 00:14:53.979101   13512 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 00:14:54.135110   13512 logs.go:123] Gathering logs for dmesg ...
	I0328 00:14:54.135194   13512 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 00:14:54.166983   13512 logs.go:123] Gathering logs for describe nodes ...
	I0328 00:14:54.167056   13512 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0328 00:14:54.471357   13512 logs.go:123] Gathering logs for kube-apiserver [2d1fcac82c22] ...
	I0328 00:14:54.471895   13512 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d1fcac82c22"
	I0328 00:14:54.524447   13512 logs.go:123] Gathering logs for kube-apiserver [469d6ee62f5d] ...
	I0328 00:14:54.524447   13512 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 469d6ee62f5d"
	I0328 00:14:54.607843   13512 logs.go:123] Gathering logs for etcd [876120cb9271] ...
	I0328 00:14:54.607843   13512 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 876120cb9271"
	I0328 00:14:54.663538   13512 logs.go:123] Gathering logs for kindnet [6dcd6df77ad0] ...
	I0328 00:14:54.663538   13512 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6dcd6df77ad0"
	I0328 00:14:54.701283   13512 logs.go:123] Gathering logs for kubelet ...
	I0328 00:14:54.701283   13512 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0328 00:14:54.771387   13512 logs.go:138] Found kubelet problem: Mar 28 00:12:52 ha-170000-m03 kubelet[2040]: W0328 00:12:52.440946    2040 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
	W0328 00:14:54.771387   13512 logs.go:138] Found kubelet problem: Mar 28 00:12:52 ha-170000-m03 kubelet[2040]: E0328 00:12:52.441012    2040 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
	W0328 00:14:54.771387   13512 logs.go:138] Found kubelet problem: Mar 28 00:12:52 ha-170000-m03 kubelet[2040]: W0328 00:12:52.441103    2040 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: nodes "ha-170000-m03" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
	W0328 00:14:54.771387   13512 logs.go:138] Found kubelet problem: Mar 28 00:12:52 ha-170000-m03 kubelet[2040]: E0328 00:12:52.441122    2040 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: nodes "ha-170000-m03" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
	W0328 00:14:54.773384   13512 logs.go:138] Found kubelet problem: Mar 28 00:12:52 ha-170000-m03 kubelet[2040]: E0328 00:12:52.466071    2040 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{ha-170000-m03.17c0c5483cee70cc  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ha-170000-m03,UID:ha-170000-m03,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ha-170000-m03,},FirstTimestamp:2024-03-28 00:12:52.451365068 +0000 UTC m=+0.802131598,LastTimestamp:2024-03-28 00:12:52.451365068 +0000 UTC m=+0.802131598,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-170000-m03,}"
	W0328 00:14:54.773384   13512 logs.go:138] Found kubelet problem: Mar 28 00:12:52 ha-170000-m03 kubelet[2040]: W0328 00:12:52.467071    2040 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0328 00:14:54.773384   13512 logs.go:138] Found kubelet problem: Mar 28 00:12:52 ha-170000-m03 kubelet[2040]: E0328 00:12:52.467127    2040 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0328 00:14:54.794917   13512 logs.go:123] Gathering logs for kube-scheduler [7c734b945c80] ...
	I0328 00:14:54.795079   13512 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c734b945c80"
	I0328 00:14:54.858201   13512 logs.go:123] Gathering logs for kube-proxy [9c877ca8a645] ...
	I0328 00:14:54.858201   13512 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c877ca8a645"
	I0328 00:14:54.891191   13512 logs.go:123] Gathering logs for kube-controller-manager [1c949d54f393] ...
	I0328 00:14:54.891191   13512 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c949d54f393"
	I0328 00:14:54.947186   13512 logs.go:123] Gathering logs for kube-controller-manager [1d96dd72244b] ...
	I0328 00:14:54.947186   13512 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d96dd72244b"
	I0328 00:14:54.985352   13512 out.go:304] Setting ErrFile to fd 920...
	I0328 00:14:54.985352   13512 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0328 00:14:54.985352   13512 out.go:239] X Problems detected in kubelet:
	W0328 00:14:54.985352   13512 out.go:239]   Mar 28 00:12:52 ha-170000-m03 kubelet[2040]: W0328 00:12:52.441103    2040 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: nodes "ha-170000-m03" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
	W0328 00:14:54.985352   13512 out.go:239]   Mar 28 00:12:52 ha-170000-m03 kubelet[2040]: E0328 00:12:52.441122    2040 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: nodes "ha-170000-m03" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
	W0328 00:14:54.985352   13512 out.go:239]   Mar 28 00:12:52 ha-170000-m03 kubelet[2040]: E0328 00:12:52.466071    2040 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{ha-170000-m03.17c0c5483cee70cc  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ha-170000-m03,UID:ha-170000-m03,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ha-170000-m03,},FirstTimestamp:2024-03-28 00:12:52.451365068 +0000 UTC m=+0.802131598,LastTimestamp:2024-03-28 00:12:52.451365068 +0000 UTC m=+0.802131598,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-170000-m03,}"
	W0328 00:14:54.985352   13512 out.go:239]   Mar 28 00:12:52 ha-170000-m03 kubelet[2040]: W0328 00:12:52.467071    2040 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0328 00:14:54.985352   13512 out.go:239]   Mar 28 00:12:52 ha-170000-m03 kubelet[2040]: E0328 00:12:52.467127    2040 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0328 00:14:54.985352   13512 out.go:304] Setting ErrFile to fd 920...
	I0328 00:14:54.985352   13512 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 00:15:05.003275   13512 api_server.go:253] Checking apiserver healthz at https://172.28.239.31:8443/healthz ...
	I0328 00:15:05.012958   13512 api_server.go:279] https://172.28.239.31:8443/healthz returned 200:
	ok
	I0328 00:15:05.013209   13512 round_trippers.go:463] GET https://172.28.239.31:8443/version
	I0328 00:15:05.013209   13512 round_trippers.go:469] Request Headers:
	I0328 00:15:05.013209   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:15:05.013209   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:15:05.015807   13512 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0328 00:15:05.015889   13512 api_server.go:141] control plane version: v1.29.3
	I0328 00:15:05.015889   13512 api_server.go:131] duration metric: took 11.3945847s to wait for apiserver health ...
	I0328 00:15:05.015889   13512 system_pods.go:43] waiting for kube-system pods to appear ...
	I0328 00:15:05.027296   13512 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0328 00:15:05.055587   13512 logs.go:276] 2 containers: [2d1fcac82c22 469d6ee62f5d]
	I0328 00:15:05.065882   13512 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0328 00:15:05.091905   13512 logs.go:276] 1 containers: [876120cb9271]
	I0328 00:15:05.102535   13512 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0328 00:15:05.131226   13512 logs.go:276] 0 containers: []
	W0328 00:15:05.131313   13512 logs.go:278] No container was found matching "coredns"
	I0328 00:15:05.143263   13512 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0328 00:15:05.174284   13512 logs.go:276] 1 containers: [7c734b945c80]
	I0328 00:15:05.186070   13512 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0328 00:15:05.217359   13512 logs.go:276] 1 containers: [9c877ca8a645]
	I0328 00:15:05.228478   13512 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0328 00:15:05.264775   13512 logs.go:276] 2 containers: [1c949d54f393 1d96dd72244b]
	I0328 00:15:05.277680   13512 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0328 00:15:05.316910   13512 logs.go:276] 1 containers: [6dcd6df77ad0]
	I0328 00:15:05.316910   13512 logs.go:123] Gathering logs for kube-scheduler [7c734b945c80] ...
	I0328 00:15:05.316910   13512 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c734b945c80"
	I0328 00:15:05.378581   13512 logs.go:123] Gathering logs for kube-controller-manager [1c949d54f393] ...
	I0328 00:15:05.378581   13512 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c949d54f393"
	I0328 00:15:05.437209   13512 logs.go:123] Gathering logs for kube-controller-manager [1d96dd72244b] ...
	I0328 00:15:05.437209   13512 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d96dd72244b"
	I0328 00:15:05.470250   13512 logs.go:123] Gathering logs for Docker ...
	I0328 00:15:05.470314   13512 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0328 00:15:05.552045   13512 logs.go:123] Gathering logs for kubelet ...
	I0328 00:15:05.552045   13512 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0328 00:15:05.628942   13512 logs.go:138] Found kubelet problem: Mar 28 00:12:52 ha-170000-m03 kubelet[2040]: W0328 00:12:52.440946    2040 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
	W0328 00:15:05.628942   13512 logs.go:138] Found kubelet problem: Mar 28 00:12:52 ha-170000-m03 kubelet[2040]: E0328 00:12:52.441012    2040 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
	W0328 00:15:05.629817   13512 logs.go:138] Found kubelet problem: Mar 28 00:12:52 ha-170000-m03 kubelet[2040]: W0328 00:12:52.441103    2040 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: nodes "ha-170000-m03" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
	W0328 00:15:05.629817   13512 logs.go:138] Found kubelet problem: Mar 28 00:12:52 ha-170000-m03 kubelet[2040]: E0328 00:12:52.441122    2040 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: nodes "ha-170000-m03" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
	W0328 00:15:05.631019   13512 logs.go:138] Found kubelet problem: Mar 28 00:12:52 ha-170000-m03 kubelet[2040]: E0328 00:12:52.466071    2040 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{ha-170000-m03.17c0c5483cee70cc  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ha-170000-m03,UID:ha-170000-m03,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ha-170000-m03,},FirstTimestamp:2024-03-28 00:12:52.451365068 +0000 UTC m=+0.802131598,LastTimestamp:2024-03-28 00:12:52.451365068 +0000 UTC m=+0.802131598,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-170000-m03,}"
	W0328 00:15:05.632026   13512 logs.go:138] Found kubelet problem: Mar 28 00:12:52 ha-170000-m03 kubelet[2040]: W0328 00:12:52.467071    2040 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0328 00:15:05.632706   13512 logs.go:138] Found kubelet problem: Mar 28 00:12:52 ha-170000-m03 kubelet[2040]: E0328 00:12:52.467127    2040 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0328 00:15:05.652638   13512 logs.go:123] Gathering logs for dmesg ...
	I0328 00:15:05.652638   13512 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 00:15:05.683184   13512 logs.go:123] Gathering logs for describe nodes ...
	I0328 00:15:05.683184   13512 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0328 00:15:06.013740   13512 logs.go:123] Gathering logs for kube-apiserver [469d6ee62f5d] ...
	I0328 00:15:06.013740   13512 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 469d6ee62f5d"
	I0328 00:15:06.112578   13512 logs.go:123] Gathering logs for container status ...
	I0328 00:15:06.112578   13512 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 00:15:06.244234   13512 logs.go:123] Gathering logs for kube-apiserver [2d1fcac82c22] ...
	I0328 00:15:06.244302   13512 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d1fcac82c22"
	I0328 00:15:06.293482   13512 logs.go:123] Gathering logs for etcd [876120cb9271] ...
	I0328 00:15:06.293703   13512 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 876120cb9271"
	I0328 00:15:06.352995   13512 logs.go:123] Gathering logs for kube-proxy [9c877ca8a645] ...
	I0328 00:15:06.352995   13512 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c877ca8a645"
	I0328 00:15:06.388668   13512 logs.go:123] Gathering logs for kindnet [6dcd6df77ad0] ...
	I0328 00:15:06.388668   13512 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6dcd6df77ad0"
	I0328 00:15:06.427957   13512 out.go:304] Setting ErrFile to fd 920...
	I0328 00:15:06.427957   13512 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0328 00:15:06.427957   13512 out.go:239] X Problems detected in kubelet:
	W0328 00:15:06.427957   13512 out.go:239]   Mar 28 00:12:52 ha-170000-m03 kubelet[2040]: W0328 00:12:52.441103    2040 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: nodes "ha-170000-m03" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
	W0328 00:15:06.427957   13512 out.go:239]   Mar 28 00:12:52 ha-170000-m03 kubelet[2040]: E0328 00:12:52.441122    2040 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: nodes "ha-170000-m03" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
	W0328 00:15:06.427957   13512 out.go:239]   Mar 28 00:12:52 ha-170000-m03 kubelet[2040]: E0328 00:12:52.466071    2040 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{ha-170000-m03.17c0c5483cee70cc  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ha-170000-m03,UID:ha-170000-m03,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ha-170000-m03,},FirstTimestamp:2024-03-28 00:12:52.451365068 +0000 UTC m=+0.802131598,LastTimestamp:2024-03-28 00:12:52.451365068 +0000 UTC m=+0.802131598,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-170000-m03,}"
	W0328 00:15:06.427957   13512 out.go:239]   Mar 28 00:12:52 ha-170000-m03 kubelet[2040]: W0328 00:12:52.467071    2040 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0328 00:15:06.427957   13512 out.go:239]   Mar 28 00:12:52 ha-170000-m03 kubelet[2040]: E0328 00:12:52.467127    2040 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0328 00:15:06.427957   13512 out.go:304] Setting ErrFile to fd 920...
	I0328 00:15:06.427957   13512 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 00:15:16.452773   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods
	I0328 00:15:16.452870   13512 round_trippers.go:469] Request Headers:
	I0328 00:15:16.452870   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:15:16.452870   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:15:16.463599   13512 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0328 00:15:16.475789   13512 system_pods.go:59] 24 kube-system pods found
	I0328 00:15:16.475789   13512 system_pods.go:61] "coredns-76f75df574-5npq4" [b4a0463f-825d-4255-8704-6f41119d0930] Running
	I0328 00:15:16.475789   13512 system_pods.go:61] "coredns-76f75df574-mgrhj" [99d60631-1b51-4a6c-8819-5211bda5280d] Running
	I0328 00:15:16.475789   13512 system_pods.go:61] "etcd-ha-170000" [845298f4-b42f-4a38-888d-eda92aba2483] Running
	I0328 00:15:16.475789   13512 system_pods.go:61] "etcd-ha-170000-m02" [e37bcbf6-ea52-4df9-85e5-075621af992e] Running
	I0328 00:15:16.475789   13512 system_pods.go:61] "etcd-ha-170000-m03" [f6eb8cee-0103-4081-b8b1-9599dea6fca3] Running
	I0328 00:15:16.475789   13512 system_pods.go:61] "kindnet-bkl4c" [718fd32a-7015-4747-ae2d-cc39f0b83d0a] Running
	I0328 00:15:16.475789   13512 system_pods.go:61] "kindnet-n4x2r" [3b4b74d3-f82e-4337-a430-63ff92ca0efd] Running
	I0328 00:15:16.475789   13512 system_pods.go:61] "kindnet-xf7sr" [32758e2b-9a9f-4f89-9e6e-e1594abc2019] Running
	I0328 00:15:16.475789   13512 system_pods.go:61] "kube-apiserver-ha-170000" [0a3b4585-9f02-46b3-84cf-b4920d4dd1e3] Running
	I0328 00:15:16.475789   13512 system_pods.go:61] "kube-apiserver-ha-170000-m02" [3c02a8b5-5251-48fb-9865-bbdd879129bd] Running
	I0328 00:15:16.475789   13512 system_pods.go:61] "kube-apiserver-ha-170000-m03" [0df204d3-193e-454b-97eb-288138c2cdab] Running
	I0328 00:15:16.475789   13512 system_pods.go:61] "kube-controller-manager-ha-170000" [0062a6c2-2560-410f-b286-06409e50d26f] Running
	I0328 00:15:16.475789   13512 system_pods.go:61] "kube-controller-manager-ha-170000-m02" [4b136d09-f721-4103-b51b-ad58673ef4e2] Running
	I0328 00:15:16.475789   13512 system_pods.go:61] "kube-controller-manager-ha-170000-m03" [79799961-0360-4b14-9dc4-c58065b02fd8] Running
	I0328 00:15:16.475789   13512 system_pods.go:61] "kube-proxy-29dwg" [c2c9700a-d6b4-4c64-bc5e-7d434f2df188] Running
	I0328 00:15:16.475789   13512 system_pods.go:61] "kube-proxy-w2z74" [e88fc457-735e-4a67-89a1-223af2ea10d9] Running
	I0328 00:15:16.475789   13512 system_pods.go:61] "kube-proxy-wrvmg" [a049745a-2586-4e19-b8a9-ca96fead5905] Running
	I0328 00:15:16.475789   13512 system_pods.go:61] "kube-scheduler-ha-170000" [e11fffcf-8ff5-421d-9151-e00cd9a639a1] Running
	I0328 00:15:16.475789   13512 system_pods.go:61] "kube-scheduler-ha-170000-m02" [4bb54c59-156a-42a0-bca0-fb43cd4cbe27] Running
	I0328 00:15:16.476375   13512 system_pods.go:61] "kube-scheduler-ha-170000-m03" [7077722d-b2ca-4a1c-9b18-1a5bd8e541e2] Running
	I0328 00:15:16.476375   13512 system_pods.go:61] "kube-vip-ha-170000" [f958566a-56f8-436a-b5b4-8823c6cb2e2c] Running
	I0328 00:15:16.476375   13512 system_pods.go:61] "kube-vip-ha-170000-m02" [0380ec5c-628c-429c-8f5f-36260dc029f4] Running
	I0328 00:15:16.476375   13512 system_pods.go:61] "kube-vip-ha-170000-m03" [09d0c667-4fa3-47a5-b680-370e05a735f2] Running
	I0328 00:15:16.476375   13512 system_pods.go:61] "storage-provisioner" [5586fd50-77c3-4335-8c64-1120c6a32034] Running
	I0328 00:15:16.476375   13512 system_pods.go:74] duration metric: took 11.4604134s to wait for pod list to return data ...
	I0328 00:15:16.476375   13512 default_sa.go:34] waiting for default service account to be created ...
	I0328 00:15:16.476595   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/default/serviceaccounts
	I0328 00:15:16.476595   13512 round_trippers.go:469] Request Headers:
	I0328 00:15:16.476595   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:15:16.476595   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:15:16.488285   13512 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0328 00:15:16.488285   13512 default_sa.go:45] found service account: "default"
	I0328 00:15:16.488285   13512 default_sa.go:55] duration metric: took 11.91ms for default service account to be created ...
	I0328 00:15:16.488285   13512 system_pods.go:116] waiting for k8s-apps to be running ...
	I0328 00:15:16.488285   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods
	I0328 00:15:16.488285   13512 round_trippers.go:469] Request Headers:
	I0328 00:15:16.488285   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:15:16.488285   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:15:16.499844   13512 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0328 00:15:16.510720   13512 system_pods.go:86] 24 kube-system pods found
	I0328 00:15:16.510855   13512 system_pods.go:89] "coredns-76f75df574-5npq4" [b4a0463f-825d-4255-8704-6f41119d0930] Running
	I0328 00:15:16.510855   13512 system_pods.go:89] "coredns-76f75df574-mgrhj" [99d60631-1b51-4a6c-8819-5211bda5280d] Running
	I0328 00:15:16.510855   13512 system_pods.go:89] "etcd-ha-170000" [845298f4-b42f-4a38-888d-eda92aba2483] Running
	I0328 00:15:16.510855   13512 system_pods.go:89] "etcd-ha-170000-m02" [e37bcbf6-ea52-4df9-85e5-075621af992e] Running
	I0328 00:15:16.510855   13512 system_pods.go:89] "etcd-ha-170000-m03" [f6eb8cee-0103-4081-b8b1-9599dea6fca3] Running
	I0328 00:15:16.510855   13512 system_pods.go:89] "kindnet-bkl4c" [718fd32a-7015-4747-ae2d-cc39f0b83d0a] Running
	I0328 00:15:16.510855   13512 system_pods.go:89] "kindnet-n4x2r" [3b4b74d3-f82e-4337-a430-63ff92ca0efd] Running
	I0328 00:15:16.510855   13512 system_pods.go:89] "kindnet-xf7sr" [32758e2b-9a9f-4f89-9e6e-e1594abc2019] Running
	I0328 00:15:16.510855   13512 system_pods.go:89] "kube-apiserver-ha-170000" [0a3b4585-9f02-46b3-84cf-b4920d4dd1e3] Running
	I0328 00:15:16.510855   13512 system_pods.go:89] "kube-apiserver-ha-170000-m02" [3c02a8b5-5251-48fb-9865-bbdd879129bd] Running
	I0328 00:15:16.510855   13512 system_pods.go:89] "kube-apiserver-ha-170000-m03" [0df204d3-193e-454b-97eb-288138c2cdab] Running
	I0328 00:15:16.510855   13512 system_pods.go:89] "kube-controller-manager-ha-170000" [0062a6c2-2560-410f-b286-06409e50d26f] Running
	I0328 00:15:16.510855   13512 system_pods.go:89] "kube-controller-manager-ha-170000-m02" [4b136d09-f721-4103-b51b-ad58673ef4e2] Running
	I0328 00:15:16.510855   13512 system_pods.go:89] "kube-controller-manager-ha-170000-m03" [79799961-0360-4b14-9dc4-c58065b02fd8] Running
	I0328 00:15:16.510855   13512 system_pods.go:89] "kube-proxy-29dwg" [c2c9700a-d6b4-4c64-bc5e-7d434f2df188] Running
	I0328 00:15:16.510855   13512 system_pods.go:89] "kube-proxy-w2z74" [e88fc457-735e-4a67-89a1-223af2ea10d9] Running
	I0328 00:15:16.511009   13512 system_pods.go:89] "kube-proxy-wrvmg" [a049745a-2586-4e19-b8a9-ca96fead5905] Running
	I0328 00:15:16.511009   13512 system_pods.go:89] "kube-scheduler-ha-170000" [e11fffcf-8ff5-421d-9151-e00cd9a639a1] Running
	I0328 00:15:16.511009   13512 system_pods.go:89] "kube-scheduler-ha-170000-m02" [4bb54c59-156a-42a0-bca0-fb43cd4cbe27] Running
	I0328 00:15:16.511009   13512 system_pods.go:89] "kube-scheduler-ha-170000-m03" [7077722d-b2ca-4a1c-9b18-1a5bd8e541e2] Running
	I0328 00:15:16.511063   13512 system_pods.go:89] "kube-vip-ha-170000" [f958566a-56f8-436a-b5b4-8823c6cb2e2c] Running
	I0328 00:15:16.511063   13512 system_pods.go:89] "kube-vip-ha-170000-m02" [0380ec5c-628c-429c-8f5f-36260dc029f4] Running
	I0328 00:15:16.511063   13512 system_pods.go:89] "kube-vip-ha-170000-m03" [09d0c667-4fa3-47a5-b680-370e05a735f2] Running
	I0328 00:15:16.511089   13512 system_pods.go:89] "storage-provisioner" [5586fd50-77c3-4335-8c64-1120c6a32034] Running
	I0328 00:15:16.511089   13512 system_pods.go:126] duration metric: took 22.8038ms to wait for k8s-apps to be running ...
	I0328 00:15:16.511089   13512 system_svc.go:44] waiting for kubelet service to be running ....
	I0328 00:15:16.523843   13512 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0328 00:15:16.555763   13512 system_svc.go:56] duration metric: took 44.612ms WaitForService to wait for kubelet
	I0328 00:15:16.555797   13512 kubeadm.go:576] duration metric: took 2m8.6674606s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0328 00:15:16.555868   13512 node_conditions.go:102] verifying NodePressure condition ...
	I0328 00:15:16.555915   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes
	I0328 00:15:16.556041   13512 round_trippers.go:469] Request Headers:
	I0328 00:15:16.556041   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:15:16.556041   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:15:16.561929   13512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 00:15:16.562917   13512 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0328 00:15:16.562917   13512 node_conditions.go:123] node cpu capacity is 2
	I0328 00:15:16.562917   13512 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0328 00:15:16.562917   13512 node_conditions.go:123] node cpu capacity is 2
	I0328 00:15:16.562917   13512 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0328 00:15:16.562917   13512 node_conditions.go:123] node cpu capacity is 2
	I0328 00:15:16.562917   13512 node_conditions.go:105] duration metric: took 7.0489ms to run NodePressure ...
	I0328 00:15:16.562917   13512 start.go:240] waiting for startup goroutines ...
	I0328 00:15:16.562917   13512 start.go:254] writing updated cluster config ...
	I0328 00:15:16.579370   13512 ssh_runner.go:195] Run: rm -f paused
	I0328 00:15:16.790310   13512 start.go:600] kubectl: 1.29.3, cluster: 1.29.3 (minor skew: 0)
	I0328 00:15:16.792307   13512 out.go:177] * Done! kubectl is now configured to use "ha-170000" cluster and "default" namespace by default
	
	
	==> Docker <==
	Mar 28 00:05:10 ha-170000 cri-dockerd[1222]: time="2024-03-28T00:05:10Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/c410fb61b51cfd548a8d968814e05e22008f0c805cdc74fcc84137b9dc553eeb/resolv.conf as [nameserver 172.28.224.1]"
	Mar 28 00:05:10 ha-170000 cri-dockerd[1222]: time="2024-03-28T00:05:10Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/5097b6406500f3a7904a730ebf9bcbc84b8dbc1b0dbd50ea40fecc45e7785149/resolv.conf as [nameserver 172.28.224.1]"
	Mar 28 00:05:10 ha-170000 cri-dockerd[1222]: time="2024-03-28T00:05:10Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/16835a4276f7ba5dc7475845ca6b25cce308df29b8caac3eb9c69872395ae928/resolv.conf as [nameserver 172.28.224.1]"
	Mar 28 00:05:10 ha-170000 dockerd[1340]: time="2024-03-28T00:05:10.527637946Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Mar 28 00:05:10 ha-170000 dockerd[1340]: time="2024-03-28T00:05:10.527803447Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Mar 28 00:05:10 ha-170000 dockerd[1340]: time="2024-03-28T00:05:10.535174095Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 28 00:05:10 ha-170000 dockerd[1340]: time="2024-03-28T00:05:10.535319296Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 28 00:05:10 ha-170000 dockerd[1340]: time="2024-03-28T00:05:10.632010421Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Mar 28 00:05:10 ha-170000 dockerd[1340]: time="2024-03-28T00:05:10.632423224Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Mar 28 00:05:10 ha-170000 dockerd[1340]: time="2024-03-28T00:05:10.632859027Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 28 00:05:10 ha-170000 dockerd[1340]: time="2024-03-28T00:05:10.638488363Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 28 00:05:10 ha-170000 dockerd[1340]: time="2024-03-28T00:05:10.646889117Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Mar 28 00:05:10 ha-170000 dockerd[1340]: time="2024-03-28T00:05:10.648199226Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Mar 28 00:05:10 ha-170000 dockerd[1340]: time="2024-03-28T00:05:10.648306527Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 28 00:05:10 ha-170000 dockerd[1340]: time="2024-03-28T00:05:10.649054031Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 28 00:15:57 ha-170000 dockerd[1340]: time="2024-03-28T00:15:57.702788182Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Mar 28 00:15:57 ha-170000 dockerd[1340]: time="2024-03-28T00:15:57.702937683Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Mar 28 00:15:57 ha-170000 dockerd[1340]: time="2024-03-28T00:15:57.702953383Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 28 00:15:57 ha-170000 dockerd[1340]: time="2024-03-28T00:15:57.703089083Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 28 00:15:57 ha-170000 cri-dockerd[1222]: time="2024-03-28T00:15:57Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/9fe22e827be821309accf5ebe49a48347beae58ec00197836b05196adf11b6a0/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Mar 28 00:15:59 ha-170000 cri-dockerd[1222]: time="2024-03-28T00:15:59Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28"
	Mar 28 00:15:59 ha-170000 dockerd[1340]: time="2024-03-28T00:15:59.431862212Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Mar 28 00:15:59 ha-170000 dockerd[1340]: time="2024-03-28T00:15:59.433544920Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Mar 28 00:15:59 ha-170000 dockerd[1340]: time="2024-03-28T00:15:59.433817821Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 28 00:15:59 ha-170000 dockerd[1340]: time="2024-03-28T00:15:59.435603130Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	b83fcd983b8f1       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   About a minute ago   Running             busybox                   0                   9fe22e827be82       busybox-7fdf7869d9-jw6s4
	8246295778b70       cbb01a7bd410d                                                                                         11 minutes ago       Running             coredns                   0                   5097b6406500f       coredns-76f75df574-mgrhj
	d8fea38581c75       cbb01a7bd410d                                                                                         11 minutes ago       Running             coredns                   0                   c410fb61b51cf       coredns-76f75df574-5npq4
	c90ed8febdea8       6e38f40d628db                                                                                         11 minutes ago       Running             storage-provisioner       0                   16835a4276f7b       storage-provisioner
	bf50dc1255b37       kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988              12 minutes ago       Running             kindnet-cni               0                   a8adc945f2124       kindnet-n4x2r
	44afe7b75e4ac       a1d263b5dc5b0                                                                                         12 minutes ago       Running             kube-proxy                0                   ee1d628428649       kube-proxy-w2z74
	99405c5a19ad9       ghcr.io/kube-vip/kube-vip@sha256:82698885b3b5f926cd940b7000549f3d43850cb6565a708162900c1475a83016     12 minutes ago       Running             kube-vip                  0                   a790305a76458       kube-vip-ha-170000
	1ff184616e98c       6052a25da3f97                                                                                         12 minutes ago       Running             kube-controller-manager   0                   4ce90e8d8aa30       kube-controller-manager-ha-170000
	3d72f73e04bee       39f995c9f1996                                                                                         12 minutes ago       Running             kube-apiserver            0                   cc932594c4ded       kube-apiserver-ha-170000
	da083b3d9d734       8c390d98f50c0                                                                                         12 minutes ago       Running             kube-scheduler            0                   ad6e909ec407f       kube-scheduler-ha-170000
	b8c1ccb11ebd4       3861cfcd7c04c                                                                                         12 minutes ago       Running             etcd                      0                   58cd9afeced59       etcd-ha-170000
	
	
	==> coredns [8246295778b7] <==
	[INFO] 10.244.2.2:52967 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 140 0.0000752s
	[INFO] 10.244.2.2:37242 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,aa,rd,ra 140 0.0000575s
	[INFO] 10.244.1.2:35324 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,aa,rd,ra 140 0.000092101s
	[INFO] 10.244.0.4:59929 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000281301s
	[INFO] 10.244.0.4:57682 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000244101s
	[INFO] 10.244.2.2:44472 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000213601s
	[INFO] 10.244.2.2:48809 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000200001s
	[INFO] 10.244.2.2:44642 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000226002s
	[INFO] 10.244.2.2:54650 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0000936s
	[INFO] 10.244.1.2:50510 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000205601s
	[INFO] 10.244.1.2:40738 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.032875759s
	[INFO] 10.244.1.2:41252 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000062s
	[INFO] 10.244.0.4:57610 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000234001s
	[INFO] 10.244.0.4:57921 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000195801s
	[INFO] 10.244.2.2:38740 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000135701s
	[INFO] 10.244.2.2:45709 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000222601s
	[INFO] 10.244.2.2:59586 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0000705s
	[INFO] 10.244.1.2:47697 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001047s
	[INFO] 10.244.1.2:55138 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000248501s
	[INFO] 10.244.1.2:45737 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000137101s
	[INFO] 10.244.2.2:51738 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000294302s
	[INFO] 10.244.2.2:44699 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000130901s
	[INFO] 10.244.1.2:51466 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000156201s
	[INFO] 10.244.1.2:55077 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000207701s
	[INFO] 10.244.1.2:34241 - 5 "PTR IN 1.224.28.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.0001024s
	
	
	==> coredns [d8fea38581c7] <==
	[INFO] 10.244.0.4:51781 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.024278218s
	[INFO] 10.244.0.4:60752 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000135s
	[INFO] 10.244.0.4:46184 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.023788416s
	[INFO] 10.244.0.4:45507 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000223801s
	[INFO] 10.244.0.4:33072 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0001496s
	[INFO] 10.244.2.2:37301 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000186601s
	[INFO] 10.244.2.2:54878 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000263301s
	[INFO] 10.244.2.2:46781 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000061701s
	[INFO] 10.244.2.2:41724 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0001956s
	[INFO] 10.244.1.2:34059 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000162301s
	[INFO] 10.244.1.2:46112 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000108s
	[INFO] 10.244.1.2:39207 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000285202s
	[INFO] 10.244.1.2:47256 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000066s
	[INFO] 10.244.1.2:47050 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000057901s
	[INFO] 10.244.0.4:53037 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000077101s
	[INFO] 10.244.0.4:53530 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0001593s
	[INFO] 10.244.2.2:52086 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000137301s
	[INFO] 10.244.1.2:44769 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000189401s
	[INFO] 10.244.0.4:39493 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001896s
	[INFO] 10.244.0.4:37692 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000128s
	[INFO] 10.244.0.4:49225 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000148201s
	[INFO] 10.244.0.4:59721 - 5 "PTR IN 1.224.28.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000106301s
	[INFO] 10.244.2.2:57268 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000088s
	[INFO] 10.244.2.2:54394 - 5 "PTR IN 1.224.28.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000135701s
	[INFO] 10.244.1.2:58771 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000105s
	
	
	==> describe nodes <==
	Name:               ha-170000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-170000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1a940f980f77c9bfc8ef93678db8c1f49c9dd79d
	                    minikube.k8s.io/name=ha-170000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_28T00_04_44_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 28 Mar 2024 00:04:41 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-170000
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 28 Mar 2024 00:16:59 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 28 Mar 2024 00:16:18 +0000   Thu, 28 Mar 2024 00:04:38 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 28 Mar 2024 00:16:18 +0000   Thu, 28 Mar 2024 00:04:38 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 28 Mar 2024 00:16:18 +0000   Thu, 28 Mar 2024 00:04:38 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 28 Mar 2024 00:16:18 +0000   Thu, 28 Mar 2024 00:05:09 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.28.239.31
	  Hostname:    ha-170000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 a770d0428be346a1a9c5e89c2b0227a7
	  System UUID:                9452b03b-f477-1b41-a3a5-ba63fc271926
	  Boot ID:                    286ae28b-54a4-4ee2-9e74-d085b0ae89c4
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.0
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7fdf7869d9-jw6s4             0 (0%)        0 (0%)      0 (0%)           0 (0%)         68s
	  kube-system                 coredns-76f75df574-5npq4             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     12m
	  kube-system                 coredns-76f75df574-mgrhj             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     12m
	  kube-system                 etcd-ha-170000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         12m
	  kube-system                 kindnet-n4x2r                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      12m
	  kube-system                 kube-apiserver-ha-170000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-ha-170000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-w2z74                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-ha-170000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-vip-ha-170000                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 12m                kube-proxy       
	  Normal  NodeHasSufficientPID     12m (x7 over 12m)  kubelet          Node ha-170000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  12m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 12m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  12m (x8 over 12m)  kubelet          Node ha-170000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m (x8 over 12m)  kubelet          Node ha-170000 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 12m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  12m                kubelet          Node ha-170000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m                kubelet          Node ha-170000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m                kubelet          Node ha-170000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  12m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           12m                node-controller  Node ha-170000 event: Registered Node ha-170000 in Controller
	  Normal  NodeReady                11m                kubelet          Node ha-170000 status is now: NodeReady
	  Normal  RegisteredNode           7m54s              node-controller  Node ha-170000 event: Registered Node ha-170000 in Controller
	  Normal  RegisteredNode           2m24s              node-controller  Node ha-170000 event: Registered Node ha-170000 in Controller
	
	
	Name:               ha-170000-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-170000-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1a940f980f77c9bfc8ef93678db8c1f49c9dd79d
	                    minikube.k8s.io/name=ha-170000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_03_28T00_08_56_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 28 Mar 2024 00:08:48 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-170000-m02
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 28 Mar 2024 00:16:58 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 28 Mar 2024 00:16:27 +0000   Thu, 28 Mar 2024 00:08:48 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 28 Mar 2024 00:16:27 +0000   Thu, 28 Mar 2024 00:08:48 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 28 Mar 2024 00:16:27 +0000   Thu, 28 Mar 2024 00:08:48 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 28 Mar 2024 00:16:27 +0000   Thu, 28 Mar 2024 00:09:07 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.28.224.3
	  Hostname:    ha-170000-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 f60ab19a10b942a88b67b15a72ab77d0
	  System UUID:                33d5c3c7-5f0d-1f4a-93fb-c3dc18b4a10f
	  Boot ID:                    9c1d6c65-61e5-4a10-9316-c218e1e8157f
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.0
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7fdf7869d9-shnp5                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         68s
	  kube-system                 etcd-ha-170000-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         8m15s
	  kube-system                 kindnet-xf7sr                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      8m16s
	  kube-system                 kube-apiserver-ha-170000-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m16s
	  kube-system                 kube-controller-manager-ha-170000-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m16s
	  kube-system                 kube-proxy-wrvmg                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m16s
	  kube-system                 kube-scheduler-ha-170000-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m16s
	  kube-system                 kube-vip-ha-170000-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m16s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 8m10s                  kube-proxy       
	  Normal  Starting                 8m17s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  8m17s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  8m16s (x2 over 8m17s)  kubelet          Node ha-170000-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m16s (x2 over 8m17s)  kubelet          Node ha-170000-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m16s (x2 over 8m17s)  kubelet          Node ha-170000-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           8m13s                  node-controller  Node ha-170000-m02 event: Registered Node ha-170000-m02 in Controller
	  Normal  NodeReady                7m57s                  kubelet          Node ha-170000-m02 status is now: NodeReady
	  Normal  RegisteredNode           7m54s                  node-controller  Node ha-170000-m02 event: Registered Node ha-170000-m02 in Controller
	  Normal  RegisteredNode           2m24s                  node-controller  Node ha-170000-m02 event: Registered Node ha-170000-m02 in Controller
	
	
	Name:               ha-170000-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-170000-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1a940f980f77c9bfc8ef93678db8c1f49c9dd79d
	                    minikube.k8s.io/name=ha-170000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_03_28T00_13_07_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 28 Mar 2024 00:12:52 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-170000-m03
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 28 Mar 2024 00:16:57 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 28 Mar 2024 00:16:27 +0000   Thu, 28 Mar 2024 00:12:52 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 28 Mar 2024 00:16:27 +0000   Thu, 28 Mar 2024 00:12:52 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 28 Mar 2024 00:16:27 +0000   Thu, 28 Mar 2024 00:12:52 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 28 Mar 2024 00:16:27 +0000   Thu, 28 Mar 2024 00:13:11 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.28.227.17
	  Hostname:    ha-170000-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 4e7b215a7a6d4988b3521d84ebec4ac2
	  System UUID:                1ce6e39f-d5cc-944a-9944-0641d98a8c34
	  Boot ID:                    7461441a-4c10-4f12-8c56-536a4b743d7e
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.0
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7fdf7869d9-lb47v                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         69s
	  kube-system                 etcd-ha-170000-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m12s
	  kube-system                 kindnet-bkl4c                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m13s
	  kube-system                 kube-apiserver-ha-170000-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m9s
	  kube-system                 kube-controller-manager-ha-170000-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m8s
	  kube-system                 kube-proxy-29dwg                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m13s
	  kube-system                 kube-scheduler-ha-170000-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m8s
	  kube-system                 kube-vip-ha-170000-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m6s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m7s                   kube-proxy       
	  Normal  NodeHasSufficientMemory  4m13s (x8 over 4m13s)  kubelet          Node ha-170000-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m13s (x8 over 4m13s)  kubelet          Node ha-170000-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m13s (x7 over 4m13s)  kubelet          Node ha-170000-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m13s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m10s                  node-controller  Node ha-170000-m03 event: Registered Node ha-170000-m03 in Controller
	  Normal  RegisteredNode           4m9s                   node-controller  Node ha-170000-m03 event: Registered Node ha-170000-m03 in Controller
	  Normal  RegisteredNode           2m25s                  node-controller  Node ha-170000-m03 event: Registered Node ha-170000-m03 in Controller
	
	
	==> dmesg <==
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Mar28 00:03] systemd-fstab-generator[635]: Ignoring "noauto" option for root device
	[  +0.205053] systemd-fstab-generator[647]: Ignoring "noauto" option for root device
	[Mar28 00:04] systemd-fstab-generator[937]: Ignoring "noauto" option for root device
	[  +0.113717] kauditd_printk_skb: 59 callbacks suppressed
	[  +0.613584] systemd-fstab-generator[977]: Ignoring "noauto" option for root device
	[  +0.245296] systemd-fstab-generator[989]: Ignoring "noauto" option for root device
	[  +0.257352] systemd-fstab-generator[1003]: Ignoring "noauto" option for root device
	[  +2.872152] systemd-fstab-generator[1174]: Ignoring "noauto" option for root device
	[  +0.237910] systemd-fstab-generator[1186]: Ignoring "noauto" option for root device
	[  +0.212945] systemd-fstab-generator[1199]: Ignoring "noauto" option for root device
	[  +0.311443] systemd-fstab-generator[1214]: Ignoring "noauto" option for root device
	[ +12.078812] systemd-fstab-generator[1325]: Ignoring "noauto" option for root device
	[  +0.114553] kauditd_printk_skb: 205 callbacks suppressed
	[  +3.394309] systemd-fstab-generator[1527]: Ignoring "noauto" option for root device
	[  +7.907050] systemd-fstab-generator[1801]: Ignoring "noauto" option for root device
	[  +0.116312] kauditd_printk_skb: 73 callbacks suppressed
	[  +6.606910] kauditd_printk_skb: 67 callbacks suppressed
	[  +4.398530] systemd-fstab-generator[2745]: Ignoring "noauto" option for root device
	[ +15.094612] kauditd_printk_skb: 17 callbacks suppressed
	[Mar28 00:05] kauditd_printk_skb: 29 callbacks suppressed
	[  +5.201287] kauditd_printk_skb: 14 callbacks suppressed
	[Mar28 00:07] hrtimer: interrupt took 5469614 ns
	[Mar28 00:08] kauditd_printk_skb: 11 callbacks suppressed
	
	
	==> etcd [b8c1ccb11ebd] <==
	{"level":"info","ts":"2024-03-28T00:12:53.786503Z","caller":"rafthttp/stream.go:395","msg":"started stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"4125916f2488327b","remote-peer-id":"321cb4736f05787e"}
	{"level":"warn","ts":"2024-03-28T00:12:53.89825Z","caller":"etcdhttp/peer.go:150","msg":"failed to promote a member","member-id":"321cb4736f05787e","error":"etcdserver: can only promote a learner member which is in sync with leader"}
	{"level":"warn","ts":"2024-03-28T00:12:54.160321Z","caller":"etcdhttp/peer.go:150","msg":"failed to promote a member","member-id":"321cb4736f05787e","error":"etcdserver: can only promote a learner member which is in sync with leader"}
	{"level":"warn","ts":"2024-03-28T00:12:54.765926Z","caller":"etcdhttp/peer.go:150","msg":"failed to promote a member","member-id":"321cb4736f05787e","error":"etcdserver: can only promote a learner member which is in sync with leader"}
	{"level":"warn","ts":"2024-03-28T00:12:56.13717Z","caller":"etcdhttp/peer.go:150","msg":"failed to promote a member","member-id":"321cb4736f05787e","error":"etcdserver: can only promote a learner member which is in sync with leader"}
	{"level":"info","ts":"2024-03-28T00:12:56.539519Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"321cb4736f05787e"}
	{"level":"info","ts":"2024-03-28T00:12:56.539642Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"4125916f2488327b","remote-peer-id":"321cb4736f05787e"}
	{"level":"info","ts":"2024-03-28T00:12:56.539689Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"4125916f2488327b","remote-peer-id":"321cb4736f05787e"}
	{"level":"info","ts":"2024-03-28T00:12:56.625212Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"4125916f2488327b","to":"321cb4736f05787e","stream-type":"stream Message"}
	{"level":"info","ts":"2024-03-28T00:12:56.625258Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"4125916f2488327b","remote-peer-id":"321cb4736f05787e"}
	{"level":"info","ts":"2024-03-28T00:12:56.660478Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"4125916f2488327b","to":"321cb4736f05787e","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-03-28T00:12:56.660519Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"4125916f2488327b","remote-peer-id":"321cb4736f05787e"}
	{"level":"info","ts":"2024-03-28T00:12:58.389197Z","caller":"traceutil/trace.go:171","msg":"trace[1889762869] transaction","detail":"{read_only:false; response_revision:1568; number_of_response:1; }","duration":"285.156355ms","start":"2024-03-28T00:12:58.104021Z","end":"2024-03-28T00:12:58.389177Z","steps":["trace[1889762869] 'process raft request'  (duration: 285.011954ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-28T00:12:59.115883Z","caller":"etcdhttp/peer.go:150","msg":"failed to promote a member","member-id":"321cb4736f05787e","error":"etcdserver: can only promote a learner member which is in sync with leader"}
	{"level":"info","ts":"2024-03-28T00:13:06.009773Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4125916f2488327b switched to configuration voters=(453829633083842124 3610959409121163390 4694318093143913083)"}
	{"level":"info","ts":"2024-03-28T00:13:06.010253Z","caller":"membership/cluster.go:535","msg":"promote member","cluster-id":"240df326919e34d3","local-member-id":"4125916f2488327b"}
	{"level":"info","ts":"2024-03-28T00:13:06.010562Z","caller":"etcdserver/server.go:1946","msg":"applied a configuration change through raft","local-member-id":"4125916f2488327b","raft-conf-change":"ConfChangeAddNode","raft-conf-change-node-id":"321cb4736f05787e"}
	{"level":"warn","ts":"2024-03-28T00:13:07.30023Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"149.274355ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/\" range_end:\"/registry/minions0\" ","response":"range_response_count:3 size:13062"}
	{"level":"info","ts":"2024-03-28T00:13:07.301572Z","caller":"traceutil/trace.go:171","msg":"trace[1479162201] range","detail":"{range_begin:/registry/minions/; range_end:/registry/minions0; response_count:3; response_revision:1626; }","duration":"150.650063ms","start":"2024-03-28T00:13:07.150902Z","end":"2024-03-28T00:13:07.301552Z","steps":["trace[1479162201] 'range keys from in-memory index tree'  (duration: 147.03224ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-28T00:13:07.301109Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"251.169807ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-03-28T00:13:07.302421Z","caller":"traceutil/trace.go:171","msg":"trace[1502061323] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1626; }","duration":"252.571616ms","start":"2024-03-28T00:13:07.049838Z","end":"2024-03-28T00:13:07.302409Z","steps":["trace[1502061323] 'range keys from in-memory index tree'  (duration: 249.702997ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-28T00:14:36.387397Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1081}
	{"level":"info","ts":"2024-03-28T00:14:36.511381Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":1081,"took":"123.386597ms","hash":536744392,"current-db-size-bytes":3485696,"current-db-size":"3.5 MB","current-db-size-in-use-bytes":2019328,"current-db-size-in-use":"2.0 MB"}
	{"level":"info","ts":"2024-03-28T00:14:36.511533Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":536744392,"revision":1081,"compact-revision":-1}
	{"level":"info","ts":"2024-03-28T00:15:24.696824Z","caller":"traceutil/trace.go:171","msg":"trace[193068802] transaction","detail":"{read_only:false; response_revision:2003; number_of_response:1; }","duration":"124.775275ms","start":"2024-03-28T00:15:24.572031Z","end":"2024-03-28T00:15:24.696807Z","steps":["trace[193068802] 'process raft request'  (duration: 124.559774ms)"],"step_count":1}
	
	
	==> kernel <==
	 00:17:05 up 14 min,  0 users,  load average: 0.33, 0.65, 0.46
	Linux ha-170000 5.10.207 #1 SMP Wed Mar 27 22:02:20 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [bf50dc1255b3] <==
	I0328 00:16:17.622899       1 main.go:250] Node ha-170000-m03 has CIDR [10.244.2.0/24] 
	I0328 00:16:27.639135       1 main.go:223] Handling node with IPs: map[172.28.239.31:{}]
	I0328 00:16:27.639258       1 main.go:227] handling current node
	I0328 00:16:27.639274       1 main.go:223] Handling node with IPs: map[172.28.224.3:{}]
	I0328 00:16:27.639283       1 main.go:250] Node ha-170000-m02 has CIDR [10.244.1.0/24] 
	I0328 00:16:27.639787       1 main.go:223] Handling node with IPs: map[172.28.227.17:{}]
	I0328 00:16:27.639855       1 main.go:250] Node ha-170000-m03 has CIDR [10.244.2.0/24] 
	I0328 00:16:37.650593       1 main.go:223] Handling node with IPs: map[172.28.239.31:{}]
	I0328 00:16:37.650698       1 main.go:227] handling current node
	I0328 00:16:37.650821       1 main.go:223] Handling node with IPs: map[172.28.224.3:{}]
	I0328 00:16:37.650833       1 main.go:250] Node ha-170000-m02 has CIDR [10.244.1.0/24] 
	I0328 00:16:37.651063       1 main.go:223] Handling node with IPs: map[172.28.227.17:{}]
	I0328 00:16:37.651082       1 main.go:250] Node ha-170000-m03 has CIDR [10.244.2.0/24] 
	I0328 00:16:47.660431       1 main.go:223] Handling node with IPs: map[172.28.239.31:{}]
	I0328 00:16:47.660617       1 main.go:227] handling current node
	I0328 00:16:47.660633       1 main.go:223] Handling node with IPs: map[172.28.224.3:{}]
	I0328 00:16:47.660642       1 main.go:250] Node ha-170000-m02 has CIDR [10.244.1.0/24] 
	I0328 00:16:47.662063       1 main.go:223] Handling node with IPs: map[172.28.227.17:{}]
	I0328 00:16:47.662098       1 main.go:250] Node ha-170000-m03 has CIDR [10.244.2.0/24] 
	I0328 00:16:57.675794       1 main.go:223] Handling node with IPs: map[172.28.239.31:{}]
	I0328 00:16:57.675904       1 main.go:227] handling current node
	I0328 00:16:57.675920       1 main.go:223] Handling node with IPs: map[172.28.224.3:{}]
	I0328 00:16:57.675928       1 main.go:250] Node ha-170000-m02 has CIDR [10.244.1.0/24] 
	I0328 00:16:57.676284       1 main.go:223] Handling node with IPs: map[172.28.227.17:{}]
	I0328 00:16:57.676317       1 main.go:250] Node ha-170000-m03 has CIDR [10.244.2.0/24] 
	
	
	==> kube-apiserver [3d72f73e04be] <==
	I0328 00:04:43.815274       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0328 00:04:57.455186       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I0328 00:04:57.558625       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	I0328 00:08:49.266839       1 trace.go:236] Trace[2090941951]: "Update" accept:application/json, */*,audit-id:63f0cbc9-255e-4bbe-9411-84efb47253ff,client:127.0.0.1,api-group:coordination.k8s.io,api-version:v1,name:plndr-cp-lock,subresource:,namespace:kube-system,protocol:HTTP/2.0,resource:leases,scope:resource,url:/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/plndr-cp-lock,user-agent:kube-vip/v0.0.0 (linux/amd64) kubernetes/$Format,verb:PUT (28-Mar-2024 00:08:48.740) (total time: 526ms):
	Trace[2090941951]: ["GuaranteedUpdate etcd3" audit-id:63f0cbc9-255e-4bbe-9411-84efb47253ff,key:/leases/kube-system/plndr-cp-lock,type:*coordination.Lease,resource:leases.coordination.k8s.io 526ms (00:08:48.740)
	Trace[2090941951]:  ---"Txn call completed" 525ms (00:08:49.266)]
	Trace[2090941951]: [526.545433ms] [526.545433ms] END
	I0328 00:08:49.269907       1 trace.go:236] Trace[570990909]: "Patch" accept:application/json, */*,audit-id:49fdccb8-2d0a-4844-8a36-7cc93cd9a2d5,client:172.28.224.3,api-group:,api-version:v1,name:ha-170000-m02,subresource:,namespace:,protocol:HTTP/2.0,resource:nodes,scope:resource,url:/api/v1/nodes/ha-170000-m02,user-agent:kubeadm/v1.29.3 (linux/amd64) kubernetes/6813625,verb:PATCH (28-Mar-2024 00:08:48.742) (total time: 527ms):
	Trace[570990909]: ["GuaranteedUpdate etcd3" audit-id:49fdccb8-2d0a-4844-8a36-7cc93cd9a2d5,key:/minions/ha-170000-m02,type:*core.Node,resource:nodes 527ms (00:08:48.742)
	Trace[570990909]:  ---"Txn call completed" 520ms (00:08:49.265)]
	Trace[570990909]: ---"Object stored in database" 520ms (00:08:49.265)
	Trace[570990909]: [527.571437ms] [527.571437ms] END
	I0328 00:12:43.594478       1 trace.go:236] Trace[851421756]: "Update" accept:application/json, */*,audit-id:08a38f23-13b1-4699-9cca-883c7d841b02,client:127.0.0.1,api-group:coordination.k8s.io,api-version:v1,name:plndr-cp-lock,subresource:,namespace:kube-system,protocol:HTTP/2.0,resource:leases,scope:resource,url:/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/plndr-cp-lock,user-agent:kube-vip/v0.0.0 (linux/amd64) kubernetes/$Format,verb:PUT (28-Mar-2024 00:12:42.895) (total time: 698ms):
	Trace[851421756]: ["GuaranteedUpdate etcd3" audit-id:08a38f23-13b1-4699-9cca-883c7d841b02,key:/leases/kube-system/plndr-cp-lock,type:*coordination.Lease,resource:leases.coordination.k8s.io 698ms (00:12:42.896)
	Trace[851421756]:  ---"Txn call completed" 697ms (00:12:43.594)]
	Trace[851421756]: [698.497987ms] [698.497987ms] END
	I0328 00:12:43.598614       1 trace.go:236] Trace[877059726]: "Get" accept:application/json, */*,audit-id:2fbf2c81-3976-43d3-8176-75f9e5e89a1b,client:172.28.239.31,api-group:,api-version:v1,name:k8s.io-minikube-hostpath,subresource:,namespace:kube-system,protocol:HTTP/2.0,resource:endpoints,scope:resource,url:/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath,user-agent:storage-provisioner/v0.0.0 (linux/amd64) kubernetes/$Format,verb:GET (28-Mar-2024 00:12:42.950) (total time: 648ms):
	Trace[877059726]: ---"About to write a response" 648ms (00:12:43.598)
	Trace[877059726]: [648.35245ms] [648.35245ms] END
	E0328 00:12:53.392443       1 writers.go:122] apiserver was unable to write a JSON response: http: Handler timeout
	E0328 00:12:53.392586       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout
	E0328 00:12:53.392880       1 finisher.go:175] FinishRequest: post-timeout activity - time-elapsed: 9.2µs, panicked: false, err: context canceled, panic-reason: <nil>
	E0328 00:12:53.394480       1 writers.go:135] apiserver was unable to write a fallback JSON response: http: Handler timeout
	E0328 00:12:53.395166       1 timeout.go:142] post-timeout activity - time-elapsed: 2.836719ms, PATCH "/api/v1/namespaces/default/events/ha-170000-m03.17c0c548415d66c5" result: <nil>
	E0328 00:16:09.562667       1 upgradeaware.go:425] Error proxying data from client to backend: write tcp 172.28.239.31:38782->172.28.239.31:10250: write: broken pipe
	
	
	==> kube-controller-manager [1ff184616e98] <==
	I0328 00:12:52.990128       1 event.go:376] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: kindnet-s7fw9"
	I0328 00:12:56.729047       1 event.go:376] "Event occurred" object="ha-170000-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node ha-170000-m03 event: Registered Node ha-170000-m03 in Controller"
	I0328 00:12:56.754999       1 node_lifecycle_controller.go:874] "Missing timestamp for Node. Assuming now as a timestamp" node="ha-170000-m03"
	I0328 00:15:56.619758       1 event.go:376] "Event occurred" object="default/busybox" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set busybox-7fdf7869d9 to 3"
	I0328 00:15:56.678600       1 event.go:376] "Event occurred" object="default/busybox-7fdf7869d9" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-7fdf7869d9-shnp5"
	I0328 00:15:56.717113       1 event.go:376] "Event occurred" object="default/busybox-7fdf7869d9" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-7fdf7869d9-lb47v"
	I0328 00:15:56.717919       1 event.go:376] "Event occurred" object="default/busybox-7fdf7869d9" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-7fdf7869d9-jw6s4"
	I0328 00:15:56.760350       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="140.953247ms"
	I0328 00:15:56.811475       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="50.89587ms"
	I0328 00:15:56.811957       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="310.102µs"
	I0328 00:15:57.044813       1 event.go:376] "Event occurred" object="default/busybox-7fdf7869d9" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: busybox-7fdf7869d9-dcktt"
	I0328 00:15:57.207900       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="376.791295ms"
	I0328 00:15:57.308193       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="99.091325ms"
	I0328 00:15:57.310081       1 event.go:376] "Event occurred" object="default/busybox-7fdf7869d9" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: busybox-7fdf7869d9-6gfqj"
	I0328 00:15:57.341634       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="33.221276ms"
	I0328 00:15:57.342073       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="71.8µs"
	I0328 00:15:57.533626       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="71.72608ms"
	I0328 00:15:57.533808       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="125.501µs"
	I0328 00:15:57.997591       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="192.001µs"
	I0328 00:15:59.737788       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="22.225907ms"
	I0328 00:15:59.739007       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="47.5µs"
	I0328 00:15:59.943559       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="22.59271ms"
	I0328 00:15:59.943654       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="34.1µs"
	I0328 00:16:00.639771       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="31.05855ms"
	I0328 00:16:00.641028       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="46.6µs"
	
	
	==> kube-proxy [44afe7b75e4a] <==
	I0328 00:04:58.973139       1 server_others.go:72] "Using iptables proxy"
	I0328 00:04:58.988819       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["172.28.239.31"]
	I0328 00:04:59.088028       1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
	I0328 00:04:59.088060       1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0328 00:04:59.088078       1 server_others.go:168] "Using iptables Proxier"
	I0328 00:04:59.093647       1 proxier.go:245] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0328 00:04:59.098135       1 server.go:865] "Version info" version="v1.29.3"
	I0328 00:04:59.098325       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0328 00:04:59.100225       1 config.go:188] "Starting service config controller"
	I0328 00:04:59.100347       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0328 00:04:59.100734       1 config.go:97] "Starting endpoint slice config controller"
	I0328 00:04:59.100997       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0328 00:04:59.102062       1 config.go:315] "Starting node config controller"
	I0328 00:04:59.102249       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0328 00:04:59.200882       1 shared_informer.go:318] Caches are synced for service config
	I0328 00:04:59.202008       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0328 00:04:59.202652       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [da083b3d9d73] <==
	W0328 00:04:40.637115       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0328 00:04:40.637170       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0328 00:04:40.638567       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0328 00:04:40.638667       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0328 00:04:40.729652       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0328 00:04:40.729808       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0328 00:04:40.732295       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0328 00:04:40.732696       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0328 00:04:40.777283       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0328 00:04:40.777485       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0328 00:04:40.865405       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0328 00:04:40.865632       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0328 00:04:42.494142       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0328 00:15:56.715070       1 framework.go:1244] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7fdf7869d9-shnp5\": pod busybox-7fdf7869d9-shnp5 is already assigned to node \"ha-170000-m02\"" plugin="DefaultBinder" pod="default/busybox-7fdf7869d9-shnp5" node="ha-170000-m02"
	E0328 00:15:56.716179       1 schedule_one.go:336] "scheduler cache ForgetPod failed" err="pod eea845c5-e86a-4f91-aa4c-190c2119b444(default/busybox-7fdf7869d9-shnp5) wasn't assumed so cannot be forgotten"
	E0328 00:15:56.716468       1 schedule_one.go:1003] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7fdf7869d9-shnp5\": pod busybox-7fdf7869d9-shnp5 is already assigned to node \"ha-170000-m02\"" pod="default/busybox-7fdf7869d9-shnp5"
	I0328 00:15:56.716618       1 schedule_one.go:1016] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7fdf7869d9-shnp5" node="ha-170000-m02"
	E0328 00:15:56.758748       1 framework.go:1244] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7fdf7869d9-lb47v\": pod busybox-7fdf7869d9-lb47v is already assigned to node \"ha-170000-m03\"" plugin="DefaultBinder" pod="default/busybox-7fdf7869d9-lb47v" node="ha-170000-m03"
	E0328 00:15:56.759336       1 schedule_one.go:336] "scheduler cache ForgetPod failed" err="pod 930d4502-cdff-45dc-babd-2a6933e098f7(default/busybox-7fdf7869d9-lb47v) wasn't assumed so cannot be forgotten"
	E0328 00:15:56.759643       1 schedule_one.go:1003] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7fdf7869d9-lb47v\": pod busybox-7fdf7869d9-lb47v is already assigned to node \"ha-170000-m03\"" pod="default/busybox-7fdf7869d9-lb47v"
	I0328 00:15:56.760022       1 schedule_one.go:1016] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7fdf7869d9-lb47v" node="ha-170000-m03"
	E0328 00:15:56.765846       1 framework.go:1244] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7fdf7869d9-jw6s4\": pod busybox-7fdf7869d9-jw6s4 is already assigned to node \"ha-170000\"" plugin="DefaultBinder" pod="default/busybox-7fdf7869d9-jw6s4" node="ha-170000"
	E0328 00:15:56.767099       1 schedule_one.go:336] "scheduler cache ForgetPod failed" err="pod 84df2f13-7839-4bd8-8611-52ce5902ebb3(default/busybox-7fdf7869d9-jw6s4) wasn't assumed so cannot be forgotten"
	E0328 00:15:56.770015       1 schedule_one.go:1003] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7fdf7869d9-jw6s4\": pod busybox-7fdf7869d9-jw6s4 is already assigned to node \"ha-170000\"" pod="default/busybox-7fdf7869d9-jw6s4"
	I0328 00:15:56.770380       1 schedule_one.go:1016] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7fdf7869d9-jw6s4" node="ha-170000"
	
	
	==> kubelet <==
	Mar 28 00:12:44 ha-170000 kubelet[2789]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 28 00:12:44 ha-170000 kubelet[2789]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 28 00:13:44 ha-170000 kubelet[2789]: E0328 00:13:44.026425    2789 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 28 00:13:44 ha-170000 kubelet[2789]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 28 00:13:44 ha-170000 kubelet[2789]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 28 00:13:44 ha-170000 kubelet[2789]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 28 00:13:44 ha-170000 kubelet[2789]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 28 00:14:44 ha-170000 kubelet[2789]: E0328 00:14:44.026029    2789 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 28 00:14:44 ha-170000 kubelet[2789]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 28 00:14:44 ha-170000 kubelet[2789]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 28 00:14:44 ha-170000 kubelet[2789]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 28 00:14:44 ha-170000 kubelet[2789]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 28 00:15:44 ha-170000 kubelet[2789]: E0328 00:15:44.032635    2789 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 28 00:15:44 ha-170000 kubelet[2789]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 28 00:15:44 ha-170000 kubelet[2789]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 28 00:15:44 ha-170000 kubelet[2789]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 28 00:15:44 ha-170000 kubelet[2789]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 28 00:15:56 ha-170000 kubelet[2789]: I0328 00:15:56.747789    2789 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-5npq4" podStartSLOduration=659.747674523 podStartE2EDuration="10m59.747674523s" podCreationTimestamp="2024-03-28 00:04:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-03-28 00:05:11.460389378 +0000 UTC m=+27.772987071" watchObservedRunningTime="2024-03-28 00:15:56.747674523 +0000 UTC m=+673.060272216"
	Mar 28 00:15:56 ha-170000 kubelet[2789]: I0328 00:15:56.748491    2789 topology_manager.go:215] "Topology Admit Handler" podUID="84df2f13-7839-4bd8-8611-52ce5902ebb3" podNamespace="default" podName="busybox-7fdf7869d9-jw6s4"
	Mar 28 00:15:56 ha-170000 kubelet[2789]: I0328 00:15:56.904057    2789 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lshmm\" (UniqueName: \"kubernetes.io/projected/84df2f13-7839-4bd8-8611-52ce5902ebb3-kube-api-access-lshmm\") pod \"busybox-7fdf7869d9-jw6s4\" (UID: \"84df2f13-7839-4bd8-8611-52ce5902ebb3\") " pod="default/busybox-7fdf7869d9-jw6s4"
	Mar 28 00:16:44 ha-170000 kubelet[2789]: E0328 00:16:44.025693    2789 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 28 00:16:44 ha-170000 kubelet[2789]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 28 00:16:44 ha-170000 kubelet[2789]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 28 00:16:44 ha-170000 kubelet[2789]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 28 00:16:44 ha-170000 kubelet[2789]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

-- /stdout --
** stderr ** 
	W0328 00:16:55.838924    6868 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
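Editor's note: the kubelet entries in the log above repeat the same ip6tables canary failure; ip6tables v1.8.9 (legacy) cannot initialize the `nat' table, which typically means the ip6_tables kernel module is not loaded in the guest VM. A minimal check from inside the VM could look like the sketch below (an illustration, not part of the test run; assumes a Linux guest exposing /proc/modules, with `modprobe` as the hypothetical fix, which needs root):

```shell
# Check whether the ip6_tables module backing the legacy ip6tables
# `nat' table is loaded; if not, print a possible fix (needs root).
if grep -q '^ip6_tables' /proc/modules 2>/dev/null; then
  echo "ip6_tables loaded"
else
  echo "ip6_tables missing; possible fix: sudo modprobe ip6_tables"
fi
```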
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p ha-170000 -n ha-170000
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p ha-170000 -n ha-170000: (13.1901453s)
helpers_test.go:261: (dbg) Run:  kubectl --context ha-170000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/PingHostFromPods FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/PingHostFromPods (73.10s)

TestMultiControlPlane/serial/CopyFile (583.01s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-170000 status --output json -v=7 --alsologtostderr
ha_test.go:326: (dbg) Done: out/minikube-windows-amd64.exe -p ha-170000 status --output json -v=7 --alsologtostderr: (51.5482299s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-170000 cp testdata\cp-test.txt ha-170000:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-170000 cp testdata\cp-test.txt ha-170000:/home/docker/cp-test.txt: (10.3230149s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-170000 ssh -n ha-170000 "sudo cat /home/docker/cp-test.txt"
E0328 00:23:29.013917   10460 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-848700\client.crt: The system cannot find the path specified.
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-170000 ssh -n ha-170000 "sudo cat /home/docker/cp-test.txt": (10.209986s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-170000 cp ha-170000:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile1641547343\001\cp-test_ha-170000.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-170000 cp ha-170000:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile1641547343\001\cp-test_ha-170000.txt: (10.1800245s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-170000 ssh -n ha-170000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-170000 ssh -n ha-170000 "sudo cat /home/docker/cp-test.txt": (10.235865s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-170000 cp ha-170000:/home/docker/cp-test.txt ha-170000-m02:/home/docker/cp-test_ha-170000_ha-170000-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-170000 cp ha-170000:/home/docker/cp-test.txt ha-170000-m02:/home/docker/cp-test_ha-170000_ha-170000-m02.txt: (17.8947003s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-170000 ssh -n ha-170000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-170000 ssh -n ha-170000 "sudo cat /home/docker/cp-test.txt": (10.2239213s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-170000 ssh -n ha-170000-m02 "sudo cat /home/docker/cp-test_ha-170000_ha-170000-m02.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-170000 ssh -n ha-170000-m02 "sudo cat /home/docker/cp-test_ha-170000_ha-170000-m02.txt": (10.2664774s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-170000 cp ha-170000:/home/docker/cp-test.txt ha-170000-m03:/home/docker/cp-test_ha-170000_ha-170000-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-170000 cp ha-170000:/home/docker/cp-test.txt ha-170000-m03:/home/docker/cp-test_ha-170000_ha-170000-m03.txt: (18.0029843s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-170000 ssh -n ha-170000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-170000 ssh -n ha-170000 "sudo cat /home/docker/cp-test.txt": (10.2301774s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-170000 ssh -n ha-170000-m03 "sudo cat /home/docker/cp-test_ha-170000_ha-170000-m03.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-170000 ssh -n ha-170000-m03 "sudo cat /home/docker/cp-test_ha-170000_ha-170000-m03.txt": (10.1141948s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-170000 cp ha-170000:/home/docker/cp-test.txt ha-170000-m04:/home/docker/cp-test_ha-170000_ha-170000-m04.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-170000 cp ha-170000:/home/docker/cp-test.txt ha-170000-m04:/home/docker/cp-test_ha-170000_ha-170000-m04.txt: (17.8555167s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-170000 ssh -n ha-170000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-170000 ssh -n ha-170000 "sudo cat /home/docker/cp-test.txt": (10.1731419s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-170000 ssh -n ha-170000-m04 "sudo cat /home/docker/cp-test_ha-170000_ha-170000-m04.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-170000 ssh -n ha-170000-m04 "sudo cat /home/docker/cp-test_ha-170000_ha-170000-m04.txt": (10.2829023s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-170000 cp testdata\cp-test.txt ha-170000-m02:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-170000 cp testdata\cp-test.txt ha-170000-m02:/home/docker/cp-test.txt: (10.1912872s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-170000 ssh -n ha-170000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-170000 ssh -n ha-170000-m02 "sudo cat /home/docker/cp-test.txt": (10.0507769s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-170000 cp ha-170000-m02:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile1641547343\001\cp-test_ha-170000-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-170000 cp ha-170000-m02:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile1641547343\001\cp-test_ha-170000-m02.txt: (10.2327797s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-170000 ssh -n ha-170000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-170000 ssh -n ha-170000-m02 "sudo cat /home/docker/cp-test.txt": (10.307012s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-170000 cp ha-170000-m02:/home/docker/cp-test.txt ha-170000:/home/docker/cp-test_ha-170000-m02_ha-170000.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-170000 cp ha-170000-m02:/home/docker/cp-test.txt ha-170000:/home/docker/cp-test_ha-170000-m02_ha-170000.txt: (17.7828615s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-170000 ssh -n ha-170000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-170000 ssh -n ha-170000-m02 "sudo cat /home/docker/cp-test.txt": (10.2803079s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-170000 ssh -n ha-170000 "sudo cat /home/docker/cp-test_ha-170000-m02_ha-170000.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-170000 ssh -n ha-170000 "sudo cat /home/docker/cp-test_ha-170000-m02_ha-170000.txt": (10.1783028s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-170000 cp ha-170000-m02:/home/docker/cp-test.txt ha-170000-m03:/home/docker/cp-test_ha-170000-m02_ha-170000-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-170000 cp ha-170000-m02:/home/docker/cp-test.txt ha-170000-m03:/home/docker/cp-test_ha-170000-m02_ha-170000-m03.txt: (17.7827389s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-170000 ssh -n ha-170000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-170000 ssh -n ha-170000-m02 "sudo cat /home/docker/cp-test.txt": (10.2111374s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-170000 ssh -n ha-170000-m03 "sudo cat /home/docker/cp-test_ha-170000-m02_ha-170000-m03.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-170000 ssh -n ha-170000-m03 "sudo cat /home/docker/cp-test_ha-170000-m02_ha-170000-m03.txt": (10.2136406s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-170000 cp ha-170000-m02:/home/docker/cp-test.txt ha-170000-m04:/home/docker/cp-test_ha-170000-m02_ha-170000-m04.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-170000 cp ha-170000-m02:/home/docker/cp-test.txt ha-170000-m04:/home/docker/cp-test_ha-170000-m02_ha-170000-m04.txt: (17.7574353s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-170000 ssh -n ha-170000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-170000 ssh -n ha-170000-m02 "sudo cat /home/docker/cp-test.txt": (10.1159256s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-170000 ssh -n ha-170000-m04 "sudo cat /home/docker/cp-test_ha-170000-m02_ha-170000-m04.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-170000 ssh -n ha-170000-m04 "sudo cat /home/docker/cp-test_ha-170000-m02_ha-170000-m04.txt": (10.2159978s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-170000 cp testdata\cp-test.txt ha-170000-m03:/home/docker/cp-test.txt
E0328 00:28:29.010640   10460 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-848700\client.crt: The system cannot find the path specified.
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-170000 cp testdata\cp-test.txt ha-170000-m03:/home/docker/cp-test.txt: (10.1764441s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-170000 ssh -n ha-170000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-170000 ssh -n ha-170000-m03 "sudo cat /home/docker/cp-test.txt": (10.189761s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-170000 cp ha-170000-m03:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile1641547343\001\cp-test_ha-170000-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-170000 cp ha-170000-m03:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile1641547343\001\cp-test_ha-170000-m03.txt: (10.1944129s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-170000 ssh -n ha-170000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-170000 ssh -n ha-170000-m03 "sudo cat /home/docker/cp-test.txt": (10.0878517s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-170000 cp ha-170000-m03:/home/docker/cp-test.txt ha-170000:/home/docker/cp-test_ha-170000-m03_ha-170000.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-170000 cp ha-170000-m03:/home/docker/cp-test.txt ha-170000:/home/docker/cp-test_ha-170000-m03_ha-170000.txt: (17.7334674s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-170000 ssh -n ha-170000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-170000 ssh -n ha-170000-m03 "sudo cat /home/docker/cp-test.txt": (10.2050603s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-170000 ssh -n ha-170000 "sudo cat /home/docker/cp-test_ha-170000-m03_ha-170000.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-170000 ssh -n ha-170000 "sudo cat /home/docker/cp-test_ha-170000-m03_ha-170000.txt": (10.2577714s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-170000 cp ha-170000-m03:/home/docker/cp-test.txt ha-170000-m02:/home/docker/cp-test_ha-170000-m03_ha-170000-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-170000 cp ha-170000-m03:/home/docker/cp-test.txt ha-170000-m02:/home/docker/cp-test_ha-170000-m03_ha-170000-m02.txt: (17.9071905s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-170000 ssh -n ha-170000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-170000 ssh -n ha-170000-m03 "sudo cat /home/docker/cp-test.txt": (10.1935099s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-170000 ssh -n ha-170000-m02 "sudo cat /home/docker/cp-test_ha-170000-m03_ha-170000-m02.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-170000 ssh -n ha-170000-m02 "sudo cat /home/docker/cp-test_ha-170000-m03_ha-170000-m02.txt": (10.2250115s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-170000 cp ha-170000-m03:/home/docker/cp-test.txt ha-170000-m04:/home/docker/cp-test_ha-170000-m03_ha-170000-m04.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-170000 cp ha-170000-m03:/home/docker/cp-test.txt ha-170000-m04:/home/docker/cp-test_ha-170000-m03_ha-170000-m04.txt: (17.9257615s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-170000 ssh -n ha-170000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-170000 ssh -n ha-170000-m03 "sudo cat /home/docker/cp-test.txt": (10.3192752s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-170000 ssh -n ha-170000-m04 "sudo cat /home/docker/cp-test_ha-170000-m03_ha-170000-m04.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-170000 ssh -n ha-170000-m04 "sudo cat /home/docker/cp-test_ha-170000-m03_ha-170000-m04.txt": (10.1793557s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-170000 cp testdata\cp-test.txt ha-170000-m04:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-170000 cp testdata\cp-test.txt ha-170000-m04:/home/docker/cp-test.txt: (10.1578124s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-170000 ssh -n ha-170000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-170000 ssh -n ha-170000-m04 "sudo cat /home/docker/cp-test.txt": (10.2017704s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-170000 cp ha-170000-m04:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile1641547343\001\cp-test_ha-170000-m04.txt
helpers_test.go:556: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-170000 cp ha-170000-m04:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile1641547343\001\cp-test_ha-170000-m04.txt: exit status 1 (4.7408274s)

** stderr ** 
	W0328 00:31:19.652785    9820 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
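Editor's note: the stderr warning above points at a context metadata file that no longer exists on disk. The Docker CLI stores each context's metadata in a directory named after the SHA-256 digest of the context name, which is why the path ends in 37a8eec1...: that hex string is the digest of the string "default". The mapping can be confirmed with a one-liner (a sketch; assumes a POSIX shell with coreutils `sha256sum`). Removing the stale metadata directory, or re-selecting the built-in context with `docker context use default`, is a possible remedy:

```shell
# Docker CLI context metadata lives under .docker/contexts/meta/<sha256(name)>;
# confirm that the digest in the warning is sha256 of "default".
printf 'default' | sha256sum | cut -d' ' -f1
# → 37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f
```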
helpers_test.go:558: failed to run command by deadline. exceeded timeout : out/minikube-windows-amd64.exe -p ha-170000 cp ha-170000-m04:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile1641547343\001\cp-test_ha-170000-m04.txt
helpers_test.go:561: failed to run an cp command. args "out/minikube-windows-amd64.exe -p ha-170000 cp ha-170000-m04:/home/docker/cp-test.txt C:\\Users\\jenkins.minikube6\\AppData\\Local\\Temp\\TestMultiControlPlaneserialCopyFile1641547343\\001\\cp-test_ha-170000-m04.txt" : exit status 1
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-170000 ssh -n ha-170000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-170000 ssh -n ha-170000-m04 "sudo cat /home/docker/cp-test.txt": context deadline exceeded (0s)
helpers_test.go:536: failed to run command by deadline. exceeded timeout : out/minikube-windows-amd64.exe -p ha-170000 ssh -n ha-170000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:539: failed to run an cp command. args "out/minikube-windows-amd64.exe -p ha-170000 ssh -n ha-170000-m04 \"sudo cat /home/docker/cp-test.txt\"" : context deadline exceeded
helpers_test.go:528: failed to read test file 'testdata/cp-test.txt' : open C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile1641547343\001\cp-test_ha-170000-m04.txt: The system cannot find the file specified.
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-170000 cp ha-170000-m04:/home/docker/cp-test.txt ha-170000:/home/docker/cp-test_ha-170000-m04_ha-170000.txt
helpers_test.go:556: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-170000 cp ha-170000-m04:/home/docker/cp-test.txt ha-170000:/home/docker/cp-test_ha-170000-m04_ha-170000.txt: context deadline exceeded (0s)
helpers_test.go:558: failed to run command by deadline. exceeded timeout : out/minikube-windows-amd64.exe -p ha-170000 cp ha-170000-m04:/home/docker/cp-test.txt ha-170000:/home/docker/cp-test_ha-170000-m04_ha-170000.txt
helpers_test.go:561: failed to run an cp command. args "out/minikube-windows-amd64.exe -p ha-170000 cp ha-170000-m04:/home/docker/cp-test.txt ha-170000:/home/docker/cp-test_ha-170000-m04_ha-170000.txt" : context deadline exceeded
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-170000 ssh -n ha-170000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-170000 ssh -n ha-170000-m04 "sudo cat /home/docker/cp-test.txt": context deadline exceeded (0s)
helpers_test.go:536: failed to run command by deadline. exceeded timeout : out/minikube-windows-amd64.exe -p ha-170000 ssh -n ha-170000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:539: failed to run an cp command. args "out/minikube-windows-amd64.exe -p ha-170000 ssh -n ha-170000-m04 \"sudo cat /home/docker/cp-test.txt\"" : context deadline exceeded
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-170000 ssh -n ha-170000 "sudo cat /home/docker/cp-test_ha-170000-m04_ha-170000.txt"
helpers_test.go:534: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-170000 ssh -n ha-170000 "sudo cat /home/docker/cp-test_ha-170000-m04_ha-170000.txt": context deadline exceeded (0s)
helpers_test.go:536: failed to run command by deadline. exceeded timeout : out/minikube-windows-amd64.exe -p ha-170000 ssh -n ha-170000 "sudo cat /home/docker/cp-test_ha-170000-m04_ha-170000.txt"
helpers_test.go:539: failed to run an cp command. args "out/minikube-windows-amd64.exe -p ha-170000 ssh -n ha-170000 \"sudo cat /home/docker/cp-test_ha-170000-m04_ha-170000.txt\"" : context deadline exceeded
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-170000 cp ha-170000-m04:/home/docker/cp-test.txt ha-170000-m02:/home/docker/cp-test_ha-170000-m04_ha-170000-m02.txt
helpers_test.go:556: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-170000 cp ha-170000-m04:/home/docker/cp-test.txt ha-170000-m02:/home/docker/cp-test_ha-170000-m04_ha-170000-m02.txt: context deadline exceeded (0s)
helpers_test.go:558: failed to run command by deadline. exceeded timeout : out/minikube-windows-amd64.exe -p ha-170000 cp ha-170000-m04:/home/docker/cp-test.txt ha-170000-m02:/home/docker/cp-test_ha-170000-m04_ha-170000-m02.txt
helpers_test.go:561: failed to run an cp command. args "out/minikube-windows-amd64.exe -p ha-170000 cp ha-170000-m04:/home/docker/cp-test.txt ha-170000-m02:/home/docker/cp-test_ha-170000-m04_ha-170000-m02.txt" : context deadline exceeded
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-170000 ssh -n ha-170000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-170000 ssh -n ha-170000-m04 "sudo cat /home/docker/cp-test.txt": context deadline exceeded (0s)
helpers_test.go:536: failed to run command by deadline. exceeded timeout : out/minikube-windows-amd64.exe -p ha-170000 ssh -n ha-170000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:539: failed to run an cp command. args "out/minikube-windows-amd64.exe -p ha-170000 ssh -n ha-170000-m04 \"sudo cat /home/docker/cp-test.txt\"" : context deadline exceeded
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-170000 ssh -n ha-170000-m02 "sudo cat /home/docker/cp-test_ha-170000-m04_ha-170000-m02.txt"
helpers_test.go:534: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-170000 ssh -n ha-170000-m02 "sudo cat /home/docker/cp-test_ha-170000-m04_ha-170000-m02.txt": context deadline exceeded (0s)
helpers_test.go:536: failed to run command by deadline. exceeded timeout : out/minikube-windows-amd64.exe -p ha-170000 ssh -n ha-170000-m02 "sudo cat /home/docker/cp-test_ha-170000-m04_ha-170000-m02.txt"
helpers_test.go:539: failed to run an cp command. args "out/minikube-windows-amd64.exe -p ha-170000 ssh -n ha-170000-m02 \"sudo cat /home/docker/cp-test_ha-170000-m04_ha-170000-m02.txt\"" : context deadline exceeded
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-170000 cp ha-170000-m04:/home/docker/cp-test.txt ha-170000-m03:/home/docker/cp-test_ha-170000-m04_ha-170000-m03.txt
helpers_test.go:556: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-170000 cp ha-170000-m04:/home/docker/cp-test.txt ha-170000-m03:/home/docker/cp-test_ha-170000-m04_ha-170000-m03.txt: context deadline exceeded (0s)
helpers_test.go:558: failed to run command by deadline. exceeded timeout : out/minikube-windows-amd64.exe -p ha-170000 cp ha-170000-m04:/home/docker/cp-test.txt ha-170000-m03:/home/docker/cp-test_ha-170000-m04_ha-170000-m03.txt
helpers_test.go:561: failed to run an cp command. args "out/minikube-windows-amd64.exe -p ha-170000 cp ha-170000-m04:/home/docker/cp-test.txt ha-170000-m03:/home/docker/cp-test_ha-170000-m04_ha-170000-m03.txt" : context deadline exceeded
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-170000 ssh -n ha-170000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-170000 ssh -n ha-170000-m04 "sudo cat /home/docker/cp-test.txt": context deadline exceeded (0s)
helpers_test.go:536: failed to run command by deadline. exceeded timeout : out/minikube-windows-amd64.exe -p ha-170000 ssh -n ha-170000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:539: failed to run an cp command. args "out/minikube-windows-amd64.exe -p ha-170000 ssh -n ha-170000-m04 \"sudo cat /home/docker/cp-test.txt\"" : context deadline exceeded
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-170000 ssh -n ha-170000-m03 "sudo cat /home/docker/cp-test_ha-170000-m04_ha-170000-m03.txt"
helpers_test.go:534: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-170000 ssh -n ha-170000-m03 "sudo cat /home/docker/cp-test_ha-170000-m04_ha-170000-m03.txt": context deadline exceeded (0s)
helpers_test.go:536: failed to run command by deadline. exceeded timeout : out/minikube-windows-amd64.exe -p ha-170000 ssh -n ha-170000-m03 "sudo cat /home/docker/cp-test_ha-170000-m04_ha-170000-m03.txt"
helpers_test.go:539: failed to run an cp command. args "out/minikube-windows-amd64.exe -p ha-170000 ssh -n ha-170000-m03 \"sudo cat /home/docker/cp-test_ha-170000-m04_ha-170000-m03.txt\"" : context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p ha-170000 -n ha-170000
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p ha-170000 -n ha-170000: (13.1875494s)
helpers_test.go:244: <<< TestMultiControlPlane/serial/CopyFile FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/CopyFile]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-170000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p ha-170000 logs -n 25: (9.8092901s)
helpers_test.go:252: TestMultiControlPlane/serial/CopyFile logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------------------------------------|-----------|-------------------|----------------|---------------------|---------------------|
	| Command |                                                           Args                                                            |  Profile  |       User        |    Version     |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------------------------------------|-----------|-------------------|----------------|---------------------|---------------------|
	| cp      | ha-170000 cp ha-170000-m02:/home/docker/cp-test.txt                                                                       | ha-170000 | minikube6\jenkins | v1.33.0-beta.0 | 28 Mar 24 00:26 UTC | 28 Mar 24 00:26 UTC |
	|         | ha-170000:/home/docker/cp-test_ha-170000-m02_ha-170000.txt                                                                |           |                   |                |                     |                     |
	| ssh     | ha-170000 ssh -n                                                                                                          | ha-170000 | minikube6\jenkins | v1.33.0-beta.0 | 28 Mar 24 00:26 UTC | 28 Mar 24 00:26 UTC |
	|         | ha-170000-m02 sudo cat                                                                                                    |           |                   |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                  |           |                   |                |                     |                     |
	| ssh     | ha-170000 ssh -n ha-170000 sudo cat                                                                                       | ha-170000 | minikube6\jenkins | v1.33.0-beta.0 | 28 Mar 24 00:26 UTC | 28 Mar 24 00:27 UTC |
	|         | /home/docker/cp-test_ha-170000-m02_ha-170000.txt                                                                          |           |                   |                |                     |                     |
	| cp      | ha-170000 cp ha-170000-m02:/home/docker/cp-test.txt                                                                       | ha-170000 | minikube6\jenkins | v1.33.0-beta.0 | 28 Mar 24 00:27 UTC | 28 Mar 24 00:27 UTC |
	|         | ha-170000-m03:/home/docker/cp-test_ha-170000-m02_ha-170000-m03.txt                                                        |           |                   |                |                     |                     |
	| ssh     | ha-170000 ssh -n                                                                                                          | ha-170000 | minikube6\jenkins | v1.33.0-beta.0 | 28 Mar 24 00:27 UTC | 28 Mar 24 00:27 UTC |
	|         | ha-170000-m02 sudo cat                                                                                                    |           |                   |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                  |           |                   |                |                     |                     |
	| ssh     | ha-170000 ssh -n ha-170000-m03 sudo cat                                                                                   | ha-170000 | minikube6\jenkins | v1.33.0-beta.0 | 28 Mar 24 00:27 UTC | 28 Mar 24 00:27 UTC |
	|         | /home/docker/cp-test_ha-170000-m02_ha-170000-m03.txt                                                                      |           |                   |                |                     |                     |
	| cp      | ha-170000 cp ha-170000-m02:/home/docker/cp-test.txt                                                                       | ha-170000 | minikube6\jenkins | v1.33.0-beta.0 | 28 Mar 24 00:27 UTC | 28 Mar 24 00:28 UTC |
	|         | ha-170000-m04:/home/docker/cp-test_ha-170000-m02_ha-170000-m04.txt                                                        |           |                   |                |                     |                     |
	| ssh     | ha-170000 ssh -n                                                                                                          | ha-170000 | minikube6\jenkins | v1.33.0-beta.0 | 28 Mar 24 00:28 UTC | 28 Mar 24 00:28 UTC |
	|         | ha-170000-m02 sudo cat                                                                                                    |           |                   |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                  |           |                   |                |                     |                     |
	| ssh     | ha-170000 ssh -n ha-170000-m04 sudo cat                                                                                   | ha-170000 | minikube6\jenkins | v1.33.0-beta.0 | 28 Mar 24 00:28 UTC | 28 Mar 24 00:28 UTC |
	|         | /home/docker/cp-test_ha-170000-m02_ha-170000-m04.txt                                                                      |           |                   |                |                     |                     |
	| cp      | ha-170000 cp testdata\cp-test.txt                                                                                         | ha-170000 | minikube6\jenkins | v1.33.0-beta.0 | 28 Mar 24 00:28 UTC | 28 Mar 24 00:28 UTC |
	|         | ha-170000-m03:/home/docker/cp-test.txt                                                                                    |           |                   |                |                     |                     |
	| ssh     | ha-170000 ssh -n                                                                                                          | ha-170000 | minikube6\jenkins | v1.33.0-beta.0 | 28 Mar 24 00:28 UTC | 28 Mar 24 00:28 UTC |
	|         | ha-170000-m03 sudo cat                                                                                                    |           |                   |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                  |           |                   |                |                     |                     |
	| cp      | ha-170000 cp ha-170000-m03:/home/docker/cp-test.txt                                                                       | ha-170000 | minikube6\jenkins | v1.33.0-beta.0 | 28 Mar 24 00:28 UTC | 28 Mar 24 00:28 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile1641547343\001\cp-test_ha-170000-m03.txt |           |                   |                |                     |                     |
	| ssh     | ha-170000 ssh -n                                                                                                          | ha-170000 | minikube6\jenkins | v1.33.0-beta.0 | 28 Mar 24 00:28 UTC | 28 Mar 24 00:29 UTC |
	|         | ha-170000-m03 sudo cat                                                                                                    |           |                   |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                  |           |                   |                |                     |                     |
	| cp      | ha-170000 cp ha-170000-m03:/home/docker/cp-test.txt                                                                       | ha-170000 | minikube6\jenkins | v1.33.0-beta.0 | 28 Mar 24 00:29 UTC | 28 Mar 24 00:29 UTC |
	|         | ha-170000:/home/docker/cp-test_ha-170000-m03_ha-170000.txt                                                                |           |                   |                |                     |                     |
	| ssh     | ha-170000 ssh -n                                                                                                          | ha-170000 | minikube6\jenkins | v1.33.0-beta.0 | 28 Mar 24 00:29 UTC | 28 Mar 24 00:29 UTC |
	|         | ha-170000-m03 sudo cat                                                                                                    |           |                   |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                  |           |                   |                |                     |                     |
	| ssh     | ha-170000 ssh -n ha-170000 sudo cat                                                                                       | ha-170000 | minikube6\jenkins | v1.33.0-beta.0 | 28 Mar 24 00:29 UTC | 28 Mar 24 00:29 UTC |
	|         | /home/docker/cp-test_ha-170000-m03_ha-170000.txt                                                                          |           |                   |                |                     |                     |
	| cp      | ha-170000 cp ha-170000-m03:/home/docker/cp-test.txt                                                                       | ha-170000 | minikube6\jenkins | v1.33.0-beta.0 | 28 Mar 24 00:29 UTC | 28 Mar 24 00:30 UTC |
	|         | ha-170000-m02:/home/docker/cp-test_ha-170000-m03_ha-170000-m02.txt                                                        |           |                   |                |                     |                     |
	| ssh     | ha-170000 ssh -n                                                                                                          | ha-170000 | minikube6\jenkins | v1.33.0-beta.0 | 28 Mar 24 00:30 UTC | 28 Mar 24 00:30 UTC |
	|         | ha-170000-m03 sudo cat                                                                                                    |           |                   |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                  |           |                   |                |                     |                     |
	| ssh     | ha-170000 ssh -n ha-170000-m02 sudo cat                                                                                   | ha-170000 | minikube6\jenkins | v1.33.0-beta.0 | 28 Mar 24 00:30 UTC | 28 Mar 24 00:30 UTC |
	|         | /home/docker/cp-test_ha-170000-m03_ha-170000-m02.txt                                                                      |           |                   |                |                     |                     |
	| cp      | ha-170000 cp ha-170000-m03:/home/docker/cp-test.txt                                                                       | ha-170000 | minikube6\jenkins | v1.33.0-beta.0 | 28 Mar 24 00:30 UTC | 28 Mar 24 00:30 UTC |
	|         | ha-170000-m04:/home/docker/cp-test_ha-170000-m03_ha-170000-m04.txt                                                        |           |                   |                |                     |                     |
	| ssh     | ha-170000 ssh -n                                                                                                          | ha-170000 | minikube6\jenkins | v1.33.0-beta.0 | 28 Mar 24 00:30 UTC | 28 Mar 24 00:30 UTC |
	|         | ha-170000-m03 sudo cat                                                                                                    |           |                   |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                  |           |                   |                |                     |                     |
	| ssh     | ha-170000 ssh -n ha-170000-m04 sudo cat                                                                                   | ha-170000 | minikube6\jenkins | v1.33.0-beta.0 | 28 Mar 24 00:30 UTC | 28 Mar 24 00:30 UTC |
	|         | /home/docker/cp-test_ha-170000-m03_ha-170000-m04.txt                                                                      |           |                   |                |                     |                     |
	| cp      | ha-170000 cp testdata\cp-test.txt                                                                                         | ha-170000 | minikube6\jenkins | v1.33.0-beta.0 | 28 Mar 24 00:30 UTC | 28 Mar 24 00:31 UTC |
	|         | ha-170000-m04:/home/docker/cp-test.txt                                                                                    |           |                   |                |                     |                     |
	| ssh     | ha-170000 ssh -n                                                                                                          | ha-170000 | minikube6\jenkins | v1.33.0-beta.0 | 28 Mar 24 00:31 UTC | 28 Mar 24 00:31 UTC |
	|         | ha-170000-m04 sudo cat                                                                                                    |           |                   |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                  |           |                   |                |                     |                     |
	| cp      | ha-170000 cp ha-170000-m04:/home/docker/cp-test.txt                                                                       | ha-170000 | minikube6\jenkins | v1.33.0-beta.0 | 28 Mar 24 00:31 UTC |                     |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile1641547343\001\cp-test_ha-170000-m04.txt |           |                   |                |                     |                     |
	|---------|---------------------------------------------------------------------------------------------------------------------------|-----------|-------------------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/28 00:01:24
	Running on machine: minikube6
	Binary: Built with gc go1.22.1 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0328 00:01:24.451554   13512 out.go:291] Setting OutFile to fd 796 ...
	I0328 00:01:24.451554   13512 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 00:01:24.451554   13512 out.go:304] Setting ErrFile to fd 920...
	I0328 00:01:24.451554   13512 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 00:01:24.476415   13512 out.go:298] Setting JSON to false
	I0328 00:01:24.479993   13512 start.go:129] hostinfo: {"hostname":"minikube6","uptime":6745,"bootTime":1711577338,"procs":188,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4170 Build 19045.4170","kernelVersion":"10.0.19045.4170 Build 19045.4170","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0328 00:01:24.479993   13512 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0328 00:01:24.486130   13512 out.go:177] * [ha-170000] minikube v1.33.0-beta.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4170 Build 19045.4170
	I0328 00:01:24.490137   13512 notify.go:220] Checking for updates...
	I0328 00:01:24.492684   13512 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0328 00:01:24.494987   13512 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0328 00:01:24.498052   13512 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0328 00:01:24.500847   13512 out.go:177]   - MINIKUBE_LOCATION=18485
	I0328 00:01:24.503280   13512 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0328 00:01:24.507157   13512 driver.go:392] Setting default libvirt URI to qemu:///system
	I0328 00:01:30.168764   13512 out.go:177] * Using the hyperv driver based on user configuration
	I0328 00:01:30.172808   13512 start.go:297] selected driver: hyperv
	I0328 00:01:30.172808   13512 start.go:901] validating driver "hyperv" against <nil>
	I0328 00:01:30.172808   13512 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0328 00:01:30.230162   13512 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0328 00:01:30.231286   13512 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0328 00:01:30.231286   13512 cni.go:84] Creating CNI manager for ""
	I0328 00:01:30.231286   13512 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0328 00:01:30.231286   13512 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0328 00:01:30.232013   13512 start.go:340] cluster config:
	{Name:ha-170000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:ha-170000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0328 00:01:30.232013   13512 iso.go:125] acquiring lock: {Name:mk879943e10653d47fd8ae811a43a2f6cff06f02 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0328 00:01:30.237048   13512 out.go:177] * Starting "ha-170000" primary control-plane node in "ha-170000" cluster
	I0328 00:01:30.239101   13512 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0328 00:01:30.239566   13512 preload.go:147] Found local preload: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4
	I0328 00:01:30.239566   13512 cache.go:56] Caching tarball of preloaded images
	I0328 00:01:30.239720   13512 preload.go:173] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0328 00:01:30.240052   13512 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0328 00:01:30.240286   13512 profile.go:142] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-170000\config.json ...
	I0328 00:01:30.240286   13512 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-170000\config.json: {Name:mk71d93613833e4ee8cfd8afcb08bb23d0afb004 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 00:01:30.241604   13512 start.go:360] acquireMachinesLock for ha-170000: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0328 00:01:30.241604   13512 start.go:364] duration metric: took 0s to acquireMachinesLock for "ha-170000"
	I0328 00:01:30.242320   13512 start.go:93] Provisioning new machine with config: &{Name:ha-170000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:ha-170000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0328 00:01:30.242320   13512 start.go:125] createHost starting for "" (driver="hyperv")
	I0328 00:01:30.245886   13512 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0328 00:01:30.246388   13512 start.go:159] libmachine.API.Create for "ha-170000" (driver="hyperv")
	I0328 00:01:30.246388   13512 client.go:168] LocalClient.Create starting
	I0328 00:01:30.246774   13512 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem
	I0328 00:01:30.246774   13512 main.go:141] libmachine: Decoding PEM data...
	I0328 00:01:30.246774   13512 main.go:141] libmachine: Parsing certificate...
	I0328 00:01:30.247487   13512 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem
	I0328 00:01:30.247487   13512 main.go:141] libmachine: Decoding PEM data...
	I0328 00:01:30.247893   13512 main.go:141] libmachine: Parsing certificate...
	I0328 00:01:30.248034   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0328 00:01:32.464702   13512 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0328 00:01:32.465750   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:01:32.465750   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0328 00:01:34.313106   13512 main.go:141] libmachine: [stdout =====>] : False
	
	I0328 00:01:34.313106   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:01:34.313106   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0328 00:01:35.908115   13512 main.go:141] libmachine: [stdout =====>] : True
	
	I0328 00:01:35.908115   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:01:35.908115   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0328 00:01:39.812126   13512 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0328 00:01:39.812126   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:01:39.814601   13512 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube6/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.0-1711559712-18485-amd64.iso...
	I0328 00:01:40.325866   13512 main.go:141] libmachine: Creating SSH key...
	I0328 00:01:40.463413   13512 main.go:141] libmachine: Creating VM...
	I0328 00:01:40.463413   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0328 00:01:43.536968   13512 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0328 00:01:43.536968   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:01:43.537235   13512 main.go:141] libmachine: Using switch "Default Switch"
	I0328 00:01:43.537337   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0328 00:01:45.429619   13512 main.go:141] libmachine: [stdout =====>] : True
	
	I0328 00:01:45.429878   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:01:45.429878   13512 main.go:141] libmachine: Creating VHD
	I0328 00:01:45.429878   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-170000\fixed.vhd' -SizeBytes 10MB -Fixed
	I0328 00:01:49.400603   13512 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube6
	Path                    : C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-170000\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 588AB536-AE95-4F4C-9215-F82B93ECAE3A
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0328 00:01:49.400603   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:01:49.400723   13512 main.go:141] libmachine: Writing magic tar header
	I0328 00:01:49.400723   13512 main.go:141] libmachine: Writing SSH key tar header
	I0328 00:01:49.410089   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-170000\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-170000\disk.vhd' -VHDType Dynamic -DeleteSource
	I0328 00:01:52.737577   13512 main.go:141] libmachine: [stdout =====>] : 
	I0328 00:01:52.737577   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:01:52.737841   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-170000\disk.vhd' -SizeBytes 20000MB
	I0328 00:01:55.375105   13512 main.go:141] libmachine: [stdout =====>] : 
	I0328 00:01:55.375976   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:01:55.376062   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-170000 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-170000' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0328 00:01:59.276195   13512 main.go:141] libmachine: [stdout =====>] : 
	Name      State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----      ----- ----------- ----------------- ------   ------             -------
	ha-170000 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0328 00:01:59.276981   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:01:59.276981   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-170000 -DynamicMemoryEnabled $false
	I0328 00:02:01.686956   13512 main.go:141] libmachine: [stdout =====>] : 
	I0328 00:02:01.686956   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:02:01.687149   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-170000 -Count 2
	I0328 00:02:03.998021   13512 main.go:141] libmachine: [stdout =====>] : 
	I0328 00:02:03.998507   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:02:03.998638   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-170000 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-170000\boot2docker.iso'
	I0328 00:02:06.809493   13512 main.go:141] libmachine: [stdout =====>] : 
	I0328 00:02:06.809493   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:02:06.809493   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-170000 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-170000\disk.vhd'
	I0328 00:02:09.631283   13512 main.go:141] libmachine: [stdout =====>] : 
	I0328 00:02:09.631283   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:02:09.631413   13512 main.go:141] libmachine: Starting VM...
	I0328 00:02:09.631413   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-170000
	I0328 00:02:12.856754   13512 main.go:141] libmachine: [stdout =====>] : 
	I0328 00:02:12.856754   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:02:12.856980   13512 main.go:141] libmachine: Waiting for host to start...
	I0328 00:02:12.857162   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-170000 ).state
	I0328 00:02:15.221077   13512 main.go:141] libmachine: [stdout =====>] : Running
	
	I0328 00:02:15.221923   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:02:15.221986   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-170000 ).networkadapters[0]).ipaddresses[0]
	I0328 00:02:17.911929   13512 main.go:141] libmachine: [stdout =====>] : 
	I0328 00:02:17.912102   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:02:18.926016   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-170000 ).state
	I0328 00:02:21.244218   13512 main.go:141] libmachine: [stdout =====>] : Running
	
	I0328 00:02:21.244471   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:02:21.244557   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-170000 ).networkadapters[0]).ipaddresses[0]
	I0328 00:02:23.870634   13512 main.go:141] libmachine: [stdout =====>] : 
	I0328 00:02:23.870634   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:02:24.885016   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-170000 ).state
	I0328 00:02:27.173814   13512 main.go:141] libmachine: [stdout =====>] : Running
	
	I0328 00:02:27.173814   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:02:27.173814   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-170000 ).networkadapters[0]).ipaddresses[0]
	I0328 00:02:29.801559   13512 main.go:141] libmachine: [stdout =====>] : 
	I0328 00:02:29.801559   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:02:30.809119   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-170000 ).state
	I0328 00:02:33.097395   13512 main.go:141] libmachine: [stdout =====>] : Running
	
	I0328 00:02:33.097395   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:02:33.097680   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-170000 ).networkadapters[0]).ipaddresses[0]
	I0328 00:02:35.725485   13512 main.go:141] libmachine: [stdout =====>] : 
	I0328 00:02:35.725485   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:02:36.737437   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-170000 ).state
	I0328 00:02:39.064704   13512 main.go:141] libmachine: [stdout =====>] : Running
	
	I0328 00:02:39.064704   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:02:39.065011   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-170000 ).networkadapters[0]).ipaddresses[0]
	I0328 00:02:41.754244   13512 main.go:141] libmachine: [stdout =====>] : 172.28.239.31
	
	I0328 00:02:41.754244   13512 main.go:141] libmachine: [stderr =====>] : 
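The repeated `Get-VM ... .state` / `ipaddresses[0]` pairs above are minikube's host-start wait loop: it re-polls the VM until the first network adapter reports an address (here `172.28.239.31`, after roughly 30 seconds of empty stdout). A portable sketch of that retry logic, with a hypothetical `get_ip` stub standing in for the PowerShell call so the loop itself can run anywhere:

```shell
# Stand-in for the logged PowerShell call:
#   (( Hyper-V\Get-VM ha-170000 ).networkadapters[0]).ipaddresses[0]
# It returns nothing until the 5th poll, mimicking the empty [stdout]
# lines seen in the log before the address appears.
get_ip() {
  [ "$1" -ge 5 ] && echo "172.28.239.31"
}

tries=0
ip=""
while [ -z "$ip" ] && [ "$tries" -lt 120 ]; do
  tries=$((tries + 1))
  ip="$(get_ip "$tries")"
  # the real loop sleeps ~1s between polls; omitted here for speed
done
echo "VM IP: $ip after $tries polls"
```

Capping the attempts (120 here, an assumed bound) is what lets the real loop time out instead of hanging when DHCP never answers.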
	I0328 00:02:41.755224   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-170000 ).state
	I0328 00:02:44.002408   13512 main.go:141] libmachine: [stdout =====>] : Running
	
	I0328 00:02:44.002408   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:02:44.002408   13512 machine.go:94] provisionDockerMachine start ...
	I0328 00:02:44.002606   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-170000 ).state
	I0328 00:02:46.285687   13512 main.go:141] libmachine: [stdout =====>] : Running
	
	I0328 00:02:46.286015   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:02:46.286306   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-170000 ).networkadapters[0]).ipaddresses[0]
	I0328 00:02:49.037569   13512 main.go:141] libmachine: [stdout =====>] : 172.28.239.31
	
	I0328 00:02:49.038358   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:02:49.044326   13512 main.go:141] libmachine: Using SSH client type: native
	I0328 00:02:49.057418   13512 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x12a9f80] 0x12acb60 <nil>  [] 0s} 172.28.239.31 22 <nil> <nil>}
	I0328 00:02:49.057418   13512 main.go:141] libmachine: About to run SSH command:
	hostname
	I0328 00:02:49.200615   13512 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0328 00:02:49.200615   13512 buildroot.go:166] provisioning hostname "ha-170000"
	I0328 00:02:49.200615   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-170000 ).state
	I0328 00:02:51.508278   13512 main.go:141] libmachine: [stdout =====>] : Running
	
	I0328 00:02:51.508946   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:02:51.508946   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-170000 ).networkadapters[0]).ipaddresses[0]
	I0328 00:02:54.197196   13512 main.go:141] libmachine: [stdout =====>] : 172.28.239.31
	
	I0328 00:02:54.197196   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:02:54.203066   13512 main.go:141] libmachine: Using SSH client type: native
	I0328 00:02:54.203830   13512 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x12a9f80] 0x12acb60 <nil>  [] 0s} 172.28.239.31 22 <nil> <nil>}
	I0328 00:02:54.203830   13512 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-170000 && echo "ha-170000" | sudo tee /etc/hostname
	I0328 00:02:54.378371   13512 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-170000
	
	I0328 00:02:54.378574   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-170000 ).state
	I0328 00:02:56.643928   13512 main.go:141] libmachine: [stdout =====>] : Running
	
	I0328 00:02:56.643928   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:02:56.644651   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-170000 ).networkadapters[0]).ipaddresses[0]
	I0328 00:02:59.331865   13512 main.go:141] libmachine: [stdout =====>] : 172.28.239.31
	
	I0328 00:02:59.331865   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:02:59.338512   13512 main.go:141] libmachine: Using SSH client type: native
	I0328 00:02:59.338712   13512 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x12a9f80] 0x12acb60 <nil>  [] 0s} 172.28.239.31 22 <nil> <nil>}
	I0328 00:02:59.338712   13512 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-170000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-170000/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-170000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0328 00:02:59.488578   13512 main.go:141] libmachine: SSH cmd err, output: <nil>: 
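The `/etc/hosts` command above has two branches: rewrite an existing `127.0.1.1` line, or append one. It can be exercised against a temporary file (GNU grep/sed assumed, no sudo) to confirm the rewrite path:

```shell
# Same branch structure as the SSH command in the log, pointed at a
# temp file instead of /etc/hosts.
hosts="$(mktemp)"
printf '127.0.0.1 localhost\n127.0.1.1 oldname\n' > "$hosts"
if ! grep -q '\sha-170000$' "$hosts"; then
  if grep -q '^127\.0\.1\.1\s' "$hosts"; then
    # an existing 127.0.1.1 entry: rewrite it in place
    sed -i 's/^127\.0\.1\.1\s.*/127.0.1.1 ha-170000/' "$hosts"
  else
    # no 127.0.1.1 line yet: append one
    echo '127.0.1.1 ha-170000' >> "$hosts"
  fi
fi
result="$(grep '^127\.0\.1\.1' "$hosts")"
echo "$result"
```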
	I0328 00:02:59.488711   13512 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0328 00:02:59.488711   13512 buildroot.go:174] setting up certificates
	I0328 00:02:59.488711   13512 provision.go:84] configureAuth start
	I0328 00:02:59.488711   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-170000 ).state
	I0328 00:03:01.767341   13512 main.go:141] libmachine: [stdout =====>] : Running
	
	I0328 00:03:01.767341   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:03:01.768381   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-170000 ).networkadapters[0]).ipaddresses[0]
	I0328 00:03:04.450579   13512 main.go:141] libmachine: [stdout =====>] : 172.28.239.31
	
	I0328 00:03:04.450579   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:03:04.451559   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-170000 ).state
	I0328 00:03:06.727559   13512 main.go:141] libmachine: [stdout =====>] : Running
	
	I0328 00:03:06.728083   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:03:06.728196   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-170000 ).networkadapters[0]).ipaddresses[0]
	I0328 00:03:09.520843   13512 main.go:141] libmachine: [stdout =====>] : 172.28.239.31
	
	I0328 00:03:09.521424   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:03:09.521424   13512 provision.go:143] copyHostCerts
	I0328 00:03:09.521659   13512 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0328 00:03:09.522138   13512 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0328 00:03:09.522138   13512 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0328 00:03:09.522429   13512 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0328 00:03:09.523970   13512 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0328 00:03:09.524237   13512 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0328 00:03:09.524315   13512 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0328 00:03:09.524655   13512 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0328 00:03:09.525708   13512 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0328 00:03:09.526067   13512 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0328 00:03:09.526145   13512 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0328 00:03:09.526458   13512 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1675 bytes)
	I0328 00:03:09.527180   13512 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-170000 san=[127.0.0.1 172.28.239.31 ha-170000 localhost minikube]
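`provision.go:117` issues the server certificate in-process with Go's crypto packages; as a hedged illustration only (not minikube's actual code path), the equivalent openssl steps are a throwaway CA followed by a CA-signed server cert carrying the SAN list from the log line. File names and subjects below are illustrative:

```shell
# Illustrative openssl equivalent of minikube's in-process cert issuance.
dir="$(mktemp -d)"
# Throwaway CA (the real run reuses ca.pem/ca-key.pem from .minikube\certs)
openssl req -x509 -newkey rsa:2048 -nodes -keyout "$dir/ca-key.pem" \
  -out "$dir/ca.pem" -days 1 -subj "/O=minikubeCA" 2>/dev/null
# Server key and CSR with the org from the log (org=jenkins.ha-170000)
openssl req -newkey rsa:2048 -nodes -keyout "$dir/server-key.pem" \
  -out "$dir/server.csr" -subj "/O=jenkins.ha-170000" 2>/dev/null
# SANs from the log: 127.0.0.1 172.28.239.31 ha-170000 localhost minikube
printf 'subjectAltName=IP:127.0.0.1,IP:172.28.239.31,DNS:ha-170000,DNS:localhost,DNS:minikube\n' > "$dir/san.ext"
openssl x509 -req -in "$dir/server.csr" -CA "$dir/ca.pem" \
  -CAkey "$dir/ca-key.pem" -CAcreateserial -days 1 \
  -extfile "$dir/san.ext" -out "$dir/server.pem" 2>/dev/null
verify_out="$(openssl verify -CAfile "$dir/ca.pem" "$dir/server.pem")"
echo "$verify_out"
```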
	I0328 00:03:09.786947   13512 provision.go:177] copyRemoteCerts
	I0328 00:03:09.798987   13512 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0328 00:03:09.798987   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-170000 ).state
	I0328 00:03:12.056732   13512 main.go:141] libmachine: [stdout =====>] : Running
	
	I0328 00:03:12.056732   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:03:12.057308   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-170000 ).networkadapters[0]).ipaddresses[0]
	I0328 00:03:14.770344   13512 main.go:141] libmachine: [stdout =====>] : 172.28.239.31
	
	I0328 00:03:14.771453   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:03:14.771453   13512 sshutil.go:53] new ssh client: &{IP:172.28.239.31 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-170000\id_rsa Username:docker}
	I0328 00:03:14.877447   13512 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (5.0784291s)
	I0328 00:03:14.877447   13512 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0328 00:03:14.877447   13512 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0328 00:03:14.928727   13512 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0328 00:03:14.928849   13512 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1196 bytes)
	I0328 00:03:14.980419   13512 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0328 00:03:14.980419   13512 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0328 00:03:15.046548   13512 provision.go:87] duration metric: took 15.5577428s to configureAuth
	I0328 00:03:15.046548   13512 buildroot.go:189] setting minikube options for container-runtime
	I0328 00:03:15.048004   13512 config.go:182] Loaded profile config "ha-170000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0328 00:03:15.048004   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-170000 ).state
	I0328 00:03:17.303779   13512 main.go:141] libmachine: [stdout =====>] : Running
	
	I0328 00:03:17.303779   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:03:17.303779   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-170000 ).networkadapters[0]).ipaddresses[0]
	I0328 00:03:19.992844   13512 main.go:141] libmachine: [stdout =====>] : 172.28.239.31
	
	I0328 00:03:19.992844   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:03:19.998086   13512 main.go:141] libmachine: Using SSH client type: native
	I0328 00:03:19.999012   13512 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x12a9f80] 0x12acb60 <nil>  [] 0s} 172.28.239.31 22 <nil> <nil>}
	I0328 00:03:19.999012   13512 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0328 00:03:20.140497   13512 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0328 00:03:20.140497   13512 buildroot.go:70] root file system type: tmpfs
	I0328 00:03:20.140767   13512 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0328 00:03:20.140767   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-170000 ).state
	I0328 00:03:22.376318   13512 main.go:141] libmachine: [stdout =====>] : Running
	
	I0328 00:03:22.376318   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:03:22.376553   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-170000 ).networkadapters[0]).ipaddresses[0]
	I0328 00:03:25.103964   13512 main.go:141] libmachine: [stdout =====>] : 172.28.239.31
	
	I0328 00:03:25.103964   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:03:25.109663   13512 main.go:141] libmachine: Using SSH client type: native
	I0328 00:03:25.110103   13512 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x12a9f80] 0x12acb60 <nil>  [] 0s} 172.28.239.31 22 <nil> <nil>}
	I0328 00:03:25.110303   13512 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0328 00:03:25.286594   13512 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0328 00:03:25.286752   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-170000 ).state
	I0328 00:03:27.574098   13512 main.go:141] libmachine: [stdout =====>] : Running
	
	I0328 00:03:27.574565   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:03:27.574565   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-170000 ).networkadapters[0]).ipaddresses[0]
	I0328 00:03:30.306899   13512 main.go:141] libmachine: [stdout =====>] : 172.28.239.31
	
	I0328 00:03:30.306899   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:03:30.313549   13512 main.go:141] libmachine: Using SSH client type: native
	I0328 00:03:30.313549   13512 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x12a9f80] 0x12acb60 <nil>  [] 0s} 172.28.239.31 22 <nil> <nil>}
	I0328 00:03:30.314145   13512 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0328 00:03:32.569540   13512 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0328 00:03:32.569540   13512 machine.go:97] duration metric: took 48.5668388s to provisionDockerMachine
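The command at 00:03:30 is an idempotent replace-if-changed install: `diff -u old new` exits non-zero when the files differ or the old unit is missing (the "can't stat" case seen here), which triggers the `mv` plus daemon-reload/enable/restart. Reduced to its pattern, with a variable standing in for the systemctl calls:

```shell
# diff || { mv && reload; } -- the pattern the log runs over SSH,
# against temp files and without systemd.
dir="$(mktemp -d)"
printf 'ExecStart=/usr/bin/dockerd\n' > "$dir/docker.service.new"
action="unchanged"
diff -u "$dir/docker.service" "$dir/docker.service.new" >/dev/null 2>&1 || {
  mv "$dir/docker.service.new" "$dir/docker.service"
  action="reloaded"  # real run: systemctl -f daemon-reload/enable/restart docker
}
echo "$action"
```

When old and new are identical, the diff succeeds and nothing is moved or restarted, which is what keeps repeated provisioning cheap.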
	I0328 00:03:32.569540   13512 client.go:171] duration metric: took 2m2.322416s to LocalClient.Create
	I0328 00:03:32.569540   13512 start.go:167] duration metric: took 2m2.322416s to libmachine.API.Create "ha-170000"
	I0328 00:03:32.570374   13512 start.go:293] postStartSetup for "ha-170000" (driver="hyperv")
	I0328 00:03:32.570374   13512 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0328 00:03:32.583941   13512 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0328 00:03:32.583941   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-170000 ).state
	I0328 00:03:34.838964   13512 main.go:141] libmachine: [stdout =====>] : Running
	
	I0328 00:03:34.838964   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:03:34.840284   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-170000 ).networkadapters[0]).ipaddresses[0]
	I0328 00:03:37.533804   13512 main.go:141] libmachine: [stdout =====>] : 172.28.239.31
	
	I0328 00:03:37.533804   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:03:37.534390   13512 sshutil.go:53] new ssh client: &{IP:172.28.239.31 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-170000\id_rsa Username:docker}
	I0328 00:03:37.653466   13512 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (5.0694944s)
	I0328 00:03:37.666488   13512 ssh_runner.go:195] Run: cat /etc/os-release
	I0328 00:03:37.674674   13512 info.go:137] Remote host: Buildroot 2023.02.9
	I0328 00:03:37.674756   13512 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0328 00:03:37.674955   13512 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0328 00:03:37.676516   13512 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\104602.pem -> 104602.pem in /etc/ssl/certs
	I0328 00:03:37.676516   13512 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\104602.pem -> /etc/ssl/certs/104602.pem
	I0328 00:03:37.688915   13512 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0328 00:03:37.710497   13512 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\104602.pem --> /etc/ssl/certs/104602.pem (1708 bytes)
	I0328 00:03:37.765469   13512 start.go:296] duration metric: took 5.1950633s for postStartSetup
	I0328 00:03:37.768050   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-170000 ).state
	I0328 00:03:39.988163   13512 main.go:141] libmachine: [stdout =====>] : Running
	
	I0328 00:03:39.988163   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:03:39.989240   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-170000 ).networkadapters[0]).ipaddresses[0]
	I0328 00:03:42.727615   13512 main.go:141] libmachine: [stdout =====>] : 172.28.239.31
	
	I0328 00:03:42.727615   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:03:42.727615   13512 profile.go:142] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-170000\config.json ...
	I0328 00:03:42.730928   13512 start.go:128] duration metric: took 2m12.4878102s to createHost
	I0328 00:03:42.731092   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-170000 ).state
	I0328 00:03:44.945675   13512 main.go:141] libmachine: [stdout =====>] : Running
	
	I0328 00:03:44.945675   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:03:44.945750   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-170000 ).networkadapters[0]).ipaddresses[0]
	I0328 00:03:47.666349   13512 main.go:141] libmachine: [stdout =====>] : 172.28.239.31
	
	I0328 00:03:47.666499   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:03:47.672532   13512 main.go:141] libmachine: Using SSH client type: native
	I0328 00:03:47.673046   13512 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x12a9f80] 0x12acb60 <nil>  [] 0s} 172.28.239.31 22 <nil> <nil>}
	I0328 00:03:47.673046   13512 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0328 00:03:47.804700   13512 main.go:141] libmachine: SSH cmd err, output: <nil>: 1711584227.820364817
	
	I0328 00:03:47.804792   13512 fix.go:216] guest clock: 1711584227.820364817
	I0328 00:03:47.804792   13512 fix.go:229] Guest: 2024-03-28 00:03:47.820364817 +0000 UTC Remote: 2024-03-28 00:03:42.7310925 +0000 UTC m=+138.490354701 (delta=5.089272317s)
	I0328 00:03:47.804928   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-170000 ).state
	I0328 00:03:50.113155   13512 main.go:141] libmachine: [stdout =====>] : Running
	
	I0328 00:03:50.113189   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:03:50.113265   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-170000 ).networkadapters[0]).ipaddresses[0]
	I0328 00:03:52.838643   13512 main.go:141] libmachine: [stdout =====>] : 172.28.239.31
	
	I0328 00:03:52.838853   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:03:52.846732   13512 main.go:141] libmachine: Using SSH client type: native
	I0328 00:03:52.847284   13512 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x12a9f80] 0x12acb60 <nil>  [] 0s} 172.28.239.31 22 <nil> <nil>}
	I0328 00:03:52.847444   13512 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1711584227
	I0328 00:03:52.997164   13512 main.go:141] libmachine: SSH cmd err, output: <nil>: Thu Mar 28 00:03:47 UTC 2024
	
	I0328 00:03:52.997164   13512 fix.go:236] clock set: Thu Mar 28 00:03:47 UTC 2024
	 (err=<nil>)
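The clock fix at fix.go:216-236 compares the guest's `date` reading against the host-side timestamp, and resets the guest clock over SSH once the skew is meaningful (here 5.089s, fixed with `sudo date -s @1711584227`). The arithmetic reduces to the following, with epochs taken from the log and a 2s threshold assumed purely for illustration:

```shell
# Guest read 1711584227.82...; the Remote timestamp 00:03:42.73 UTC
# corresponds to epoch 1711584222 -- a ~5s skew.
guest=1711584227
host=1711584222
delta=$((guest - host))
cmd="none"
if [ "$delta" -ge 2 ] || [ "$delta" -le -2 ]; then
  cmd="sudo date -s @$guest"   # as run in the log
fi
echo "delta=${delta}s action=$cmd"
```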
	I0328 00:03:52.997164   13512 start.go:83] releasing machines lock for "ha-170000", held for 2m22.7546993s
	I0328 00:03:52.997800   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-170000 ).state
	I0328 00:03:55.242776   13512 main.go:141] libmachine: [stdout =====>] : Running
	
	I0328 00:03:55.242776   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:03:55.242776   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-170000 ).networkadapters[0]).ipaddresses[0]
	I0328 00:03:57.965723   13512 main.go:141] libmachine: [stdout =====>] : 172.28.239.31
	
	I0328 00:03:57.965723   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:03:57.970792   13512 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0328 00:03:57.970953   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-170000 ).state
	I0328 00:03:57.984844   13512 ssh_runner.go:195] Run: cat /version.json
	I0328 00:03:57.985837   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-170000 ).state
	I0328 00:04:00.307907   13512 main.go:141] libmachine: [stdout =====>] : Running
	
	I0328 00:04:00.307907   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:04:00.307907   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-170000 ).networkadapters[0]).ipaddresses[0]
	I0328 00:04:00.308410   13512 main.go:141] libmachine: [stdout =====>] : Running
	
	I0328 00:04:00.308410   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:04:00.308410   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-170000 ).networkadapters[0]).ipaddresses[0]
	I0328 00:04:03.031101   13512 main.go:141] libmachine: [stdout =====>] : 172.28.239.31
	
	I0328 00:04:03.031166   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:04:03.031166   13512 sshutil.go:53] new ssh client: &{IP:172.28.239.31 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-170000\id_rsa Username:docker}
	I0328 00:04:03.053196   13512 main.go:141] libmachine: [stdout =====>] : 172.28.239.31
	
	I0328 00:04:03.054255   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:04:03.054313   13512 sshutil.go:53] new ssh client: &{IP:172.28.239.31 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-170000\id_rsa Username:docker}
	I0328 00:04:03.136308   13512 ssh_runner.go:235] Completed: cat /version.json: (5.1504398s)
	I0328 00:04:03.149467   13512 ssh_runner.go:195] Run: systemctl --version
	I0328 00:04:03.289274   13512 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.3181589s)
	I0328 00:04:03.301433   13512 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0328 00:04:03.311571   13512 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0328 00:04:03.325076   13512 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0328 00:04:03.356978   13512 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0328 00:04:03.357064   13512 start.go:494] detecting cgroup driver to use...
	I0328 00:04:03.357224   13512 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0328 00:04:03.407602   13512 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0328 00:04:03.441947   13512 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0328 00:04:03.463952   13512 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0328 00:04:03.477193   13512 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0328 00:04:03.513455   13512 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0328 00:04:03.546805   13512 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0328 00:04:03.583159   13512 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0328 00:04:03.619690   13512 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0328 00:04:03.653485   13512 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0328 00:04:03.691356   13512 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0328 00:04:03.727252   13512 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
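The run of `sed` commands above edits `/etc/containerd/config.toml` in place to force the `cgroupfs` driver. The core rewrite, `s|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g`, can be sketched in Go with a multiline-anchored regexp (helper name is illustrative):

```go
package main

import (
	"fmt"
	"regexp"
)

// rewriteSystemdCgroup mirrors the sed rule from the log: for every line
// of the form "<indent>SystemdCgroup = <anything>", keep the indent and
// force the value to false. (Sketch, not minikube's real code.)
func rewriteSystemdCgroup(config string) string {
	re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
	return re.ReplaceAllString(config, "${1}SystemdCgroup = false")
}

func main() {
	in := "[plugins.\"io.containerd.grpc.v1.cri\".containerd.runtimes.runc.options]\n" +
		"    SystemdCgroup = true\n"
	fmt.Print(rewriteSystemdCgroup(in))
}
```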
	I0328 00:04:03.760867   13512 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0328 00:04:03.792080   13512 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0328 00:04:03.829094   13512 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0328 00:04:04.045659   13512 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0328 00:04:04.081034   13512 start.go:494] detecting cgroup driver to use...
	I0328 00:04:04.094704   13512 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0328 00:04:04.133499   13512 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0328 00:04:04.173852   13512 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0328 00:04:04.232198   13512 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0328 00:04:04.274923   13512 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0328 00:04:04.313688   13512 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0328 00:04:04.380248   13512 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0328 00:04:04.405439   13512 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0328 00:04:04.453220   13512 ssh_runner.go:195] Run: which cri-dockerd
	I0328 00:04:04.480017   13512 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0328 00:04:04.501749   13512 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0328 00:04:04.551064   13512 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0328 00:04:04.788650   13512 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0328 00:04:05.003448   13512 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0328 00:04:05.003640   13512 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0328 00:04:05.055223   13512 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0328 00:04:05.278483   13512 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0328 00:04:07.844702   13512 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.566203s)
	I0328 00:04:07.858671   13512 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0328 00:04:07.899386   13512 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0328 00:04:07.936243   13512 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0328 00:04:08.154217   13512 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0328 00:04:08.389805   13512 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0328 00:04:08.603241   13512 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0328 00:04:08.648899   13512 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0328 00:04:08.687517   13512 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0328 00:04:08.926529   13512 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0328 00:04:09.041005   13512 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0328 00:04:09.055348   13512 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0328 00:04:09.065504   13512 start.go:562] Will wait 60s for crictl version
	I0328 00:04:09.081826   13512 ssh_runner.go:195] Run: which crictl
	I0328 00:04:09.108354   13512 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0328 00:04:09.198224   13512 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.0
	RuntimeApiVersion:  v1
	I0328 00:04:09.211099   13512 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0328 00:04:09.261001   13512 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0328 00:04:09.311043   13512 out.go:204] * Preparing Kubernetes v1.29.3 on Docker 26.0.0 ...
	I0328 00:04:09.311222   13512 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0328 00:04:09.316263   13512 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0328 00:04:09.316388   13512 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0328 00:04:09.316435   13512 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0328 00:04:09.316435   13512 ip.go:207] Found interface: {Index:6 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:26:7a:39 Flags:up|broadcast|multicast|running}
	I0328 00:04:09.320275   13512 ip.go:210] interface addr: fe80::e3e0:8483:9c84:940f/64
	I0328 00:04:09.320275   13512 ip.go:210] interface addr: 172.28.224.1/20
	I0328 00:04:09.334106   13512 ssh_runner.go:195] Run: grep 172.28.224.1	host.minikube.internal$ /etc/hosts
	I0328 00:04:09.343485   13512 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.28.224.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
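The bash one-liner above upserts the `host.minikube.internal` entry: `grep -v` drops any line already ending in a tab plus that hostname, then the fresh `<ip>\t<host>` line is appended. The same logic as a Go sketch (function name is illustrative):

```go
package main

import (
	"fmt"
	"strings"
)

// upsertHostsEntry reproduces the shell pipeline from the log: remove any
// existing line ending in "\t<host>", then append "<ip>\t<host>".
// (Illustrative helper, not minikube's real code.)
func upsertHostsEntry(hosts, ip, host string) string {
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
		if !strings.HasSuffix(line, "\t"+host) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+host)
	return strings.Join(kept, "\n") + "\n"
}

func main() {
	hosts := "127.0.0.1\tlocalhost\n172.28.224.9\thost.minikube.internal\n"
	fmt.Print(upsertHostsEntry(hosts, "172.28.224.1", "host.minikube.internal"))
}
```

Writing to a temp file and `sudo cp`-ing it over `/etc/hosts`, as the shell version does, avoids truncating the file mid-read.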
	I0328 00:04:09.385359   13512 kubeadm.go:877] updating cluster {Name:ha-170000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3
ClusterName:ha-170000 Namespace:default APIServerHAVIP:172.28.239.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.28.239.31 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOption
s:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0328 00:04:09.385359   13512 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0328 00:04:09.396679   13512 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0328 00:04:09.425081   13512 docker.go:685] Got preloaded images: 
	I0328 00:04:09.425196   13512 docker.go:691] registry.k8s.io/kube-apiserver:v1.29.3 wasn't preloaded
	I0328 00:04:09.439940   13512 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0328 00:04:09.480958   13512 ssh_runner.go:195] Run: which lz4
	I0328 00:04:09.493141   13512 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0328 00:04:09.507814   13512 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0328 00:04:09.513931   13512 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0328 00:04:09.513931   13512 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (367996162 bytes)
	I0328 00:04:11.607629   13512 docker.go:649] duration metric: took 2.1141571s to copy over tarball
	I0328 00:04:11.624012   13512 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0328 00:04:20.604850   13512 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (8.9807831s)
	I0328 00:04:20.604850   13512 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0328 00:04:20.682270   13512 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0328 00:04:20.707204   13512 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2630 bytes)
	I0328 00:04:20.756423   13512 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0328 00:04:20.992312   13512 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0328 00:04:23.872083   13512 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.879686s)
	I0328 00:04:23.882512   13512 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0328 00:04:23.908252   13512 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.29.3
	registry.k8s.io/kube-scheduler:v1.29.3
	registry.k8s.io/kube-controller-manager:v1.29.3
	registry.k8s.io/kube-proxy:v1.29.3
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0328 00:04:23.908252   13512 cache_images.go:84] Images are preloaded, skipping loading
	I0328 00:04:23.908252   13512 kubeadm.go:928] updating node { 172.28.239.31 8443 v1.29.3 docker true true} ...
	I0328 00:04:23.908793   13512 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-170000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.28.239.31
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:ha-170000 Namespace:default APIServerHAVIP:172.28.239.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0328 00:04:23.919286   13512 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0328 00:04:23.961434   13512 cni.go:84] Creating CNI manager for ""
	I0328 00:04:23.961434   13512 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0328 00:04:23.961434   13512 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0328 00:04:23.961434   13512 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.28.239.31 APIServerPort:8443 KubernetesVersion:v1.29.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-170000 NodeName:ha-170000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.28.239.31"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.28.239.31 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes
/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0328 00:04:23.962178   13512 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.28.239.31
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-170000"
	  kubeletExtraArgs:
	    node-ip: 172.28.239.31
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.28.239.31"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0328 00:04:23.962294   13512 kube-vip.go:111] generating kube-vip config ...
	I0328 00:04:23.975889   13512 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0328 00:04:24.004430   13512 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0328 00:04:24.004751   13512 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.28.239.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
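The kube-vip static pod above is configured entirely through environment variables: `address` carries the VIP (172.28.239.254 here), and `lb_enable`/`lb_port` switch on control-plane load balancing, which the log notes was auto-enabled. A sketch of deriving that env block from cluster parameters (helper name is illustrative, not minikube's actual code):

```go
package main

import "fmt"

// kubeVIPEnv sketches how the env section of the kube-vip static pod in
// the log could be assembled from cluster parameters. (Illustrative only;
// minikube renders this from a template.)
func kubeVIPEnv(vip string, port int, lbEnabled bool) map[string]string {
	env := map[string]string{
		"vip_arp":   "true",
		"port":      fmt.Sprint(port),
		"address":   vip,
		"cp_enable": "true",
	}
	if lbEnabled {
		// Auto-enabled control-plane load balancing, as in the log.
		env["lb_enable"] = "true"
		env["lb_port"] = fmt.Sprint(port)
	}
	return env
}

func main() {
	env := kubeVIPEnv("172.28.239.254", 8443, true)
	fmt.Println(env["address"], env["lb_port"])
	// → 172.28.239.254 8443
}
```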
	I0328 00:04:24.017981   13512 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0328 00:04:24.036154   13512 binaries.go:44] Found k8s binaries, skipping transfer
	I0328 00:04:24.048165   13512 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0328 00:04:24.068973   13512 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0328 00:04:24.104129   13512 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0328 00:04:24.139639   13512 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2154 bytes)
	I0328 00:04:24.177582   13512 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1352 bytes)
	I0328 00:04:24.227236   13512 ssh_runner.go:195] Run: grep 172.28.239.254	control-plane.minikube.internal$ /etc/hosts
	I0328 00:04:24.235219   13512 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.28.239.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0328 00:04:24.273917   13512 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0328 00:04:24.506990   13512 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0328 00:04:24.540006   13512 certs.go:68] Setting up C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-170000 for IP: 172.28.239.31
	I0328 00:04:24.540067   13512 certs.go:194] generating shared ca certs ...
	I0328 00:04:24.540067   13512 certs.go:226] acquiring lock for ca certs: {Name:mkc71405905d3cea24da832e98113e061e759324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 00:04:24.540349   13512 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key
	I0328 00:04:24.540349   13512 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key
	I0328 00:04:24.540349   13512 certs.go:256] generating profile certs ...
	I0328 00:04:24.541974   13512 certs.go:363] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-170000\client.key
	I0328 00:04:24.541974   13512 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-170000\client.crt with IP's: []
	I0328 00:04:24.889732   13512 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-170000\client.crt ...
	I0328 00:04:24.889732   13512 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-170000\client.crt: {Name:mkbdb6d224105d9846941bd7ef796bab37cf0d58 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 00:04:24.891476   13512 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-170000\client.key ...
	I0328 00:04:24.891476   13512 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-170000\client.key: {Name:mkc77ecfd07cf7c3fc46df723d6f544069ea69a2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 00:04:24.892258   13512 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-170000\apiserver.key.39f1c9ec
	I0328 00:04:24.892258   13512 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-170000\apiserver.crt.39f1c9ec with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.28.239.31 172.28.239.254]
	I0328 00:04:25.007256   13512 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-170000\apiserver.crt.39f1c9ec ...
	I0328 00:04:25.007256   13512 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-170000\apiserver.crt.39f1c9ec: {Name:mkcb18f777d1e527b25f5e2d8323733bcddf4084 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 00:04:25.008261   13512 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-170000\apiserver.key.39f1c9ec ...
	I0328 00:04:25.008261   13512 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-170000\apiserver.key.39f1c9ec: {Name:mkf6e652cffa73383c36ee164b4d394733a7b5e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 00:04:25.009975   13512 certs.go:381] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-170000\apiserver.crt.39f1c9ec -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-170000\apiserver.crt
	I0328 00:04:25.021306   13512 certs.go:385] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-170000\apiserver.key.39f1c9ec -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-170000\apiserver.key
	I0328 00:04:25.022298   13512 certs.go:363] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-170000\proxy-client.key
	I0328 00:04:25.022298   13512 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-170000\proxy-client.crt with IP's: []
	I0328 00:04:25.110902   13512 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-170000\proxy-client.crt ...
	I0328 00:04:25.110902   13512 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-170000\proxy-client.crt: {Name:mkacef89a3d7b6653974b337f3650724fbf38da7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 00:04:25.112847   13512 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-170000\proxy-client.key ...
	I0328 00:04:25.112847   13512 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-170000\proxy-client.key: {Name:mkde445a6144006913f807287c915aaab44c2514 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 00:04:25.113116   13512 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0328 00:04:25.114052   13512 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0328 00:04:25.114277   13512 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0328 00:04:25.114277   13512 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0328 00:04:25.114277   13512 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-170000\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0328 00:04:25.114946   13512 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-170000\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0328 00:04:25.115098   13512 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-170000\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0328 00:04:25.130129   13512 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-170000\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0328 00:04:25.131047   13512 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\10460.pem (1338 bytes)
	W0328 00:04:25.131489   13512 certs.go:480] ignoring C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\10460_empty.pem, impossibly tiny 0 bytes
	I0328 00:04:25.131489   13512 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0328 00:04:25.131864   13512 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0328 00:04:25.132110   13512 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0328 00:04:25.132363   13512 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0328 00:04:25.132363   13512 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\104602.pem (1708 bytes)
	I0328 00:04:25.132363   13512 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\10460.pem -> /usr/share/ca-certificates/10460.pem
	I0328 00:04:25.132363   13512 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\104602.pem -> /usr/share/ca-certificates/104602.pem
	I0328 00:04:25.132363   13512 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0328 00:04:25.133921   13512 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0328 00:04:25.189389   13512 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0328 00:04:25.241672   13512 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0328 00:04:25.297242   13512 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0328 00:04:25.350750   13512 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-170000\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0328 00:04:25.405806   13512 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-170000\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0328 00:04:25.458162   13512 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-170000\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0328 00:04:25.508928   13512 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-170000\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0328 00:04:25.557284   13512 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\10460.pem --> /usr/share/ca-certificates/10460.pem (1338 bytes)
	I0328 00:04:25.609275   13512 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\104602.pem --> /usr/share/ca-certificates/104602.pem (1708 bytes)
	I0328 00:04:25.663990   13512 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0328 00:04:25.713805   13512 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0328 00:04:25.760702   13512 ssh_runner.go:195] Run: openssl version
	I0328 00:04:25.784698   13512 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10460.pem && ln -fs /usr/share/ca-certificates/10460.pem /etc/ssl/certs/10460.pem"
	I0328 00:04:25.820345   13512 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10460.pem
	I0328 00:04:25.827272   13512 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 27 23:40 /usr/share/ca-certificates/10460.pem
	I0328 00:04:25.840635   13512 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10460.pem
	I0328 00:04:25.864081   13512 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10460.pem /etc/ssl/certs/51391683.0"
	I0328 00:04:25.900491   13512 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/104602.pem && ln -fs /usr/share/ca-certificates/104602.pem /etc/ssl/certs/104602.pem"
	I0328 00:04:25.939189   13512 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/104602.pem
	I0328 00:04:25.948137   13512 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 27 23:40 /usr/share/ca-certificates/104602.pem
	I0328 00:04:25.966079   13512 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/104602.pem
	I0328 00:04:25.992570   13512 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/104602.pem /etc/ssl/certs/3ec20f2e.0"
	I0328 00:04:26.030316   13512 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0328 00:04:26.067296   13512 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0328 00:04:26.077075   13512 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 27 23:37 /usr/share/ca-certificates/minikubeCA.pem
	I0328 00:04:26.091019   13512 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0328 00:04:26.114155   13512 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0328 00:04:26.147424   13512 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0328 00:04:26.155773   13512 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0328 00:04:26.156426   13512 kubeadm.go:391] StartCluster: {Name:ha-170000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:ha-170000 Namespace:default APIServerHAVIP:172.28.239.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.28.239.31 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0328 00:04:26.167324   13512 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0328 00:04:26.206480   13512 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0328 00:04:26.238384   13512 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0328 00:04:26.272794   13512 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0328 00:04:26.298496   13512 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0328 00:04:26.298496   13512 kubeadm.go:156] found existing configuration files:
	
	I0328 00:04:26.313645   13512 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0328 00:04:26.335191   13512 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0328 00:04:26.348467   13512 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0328 00:04:26.380500   13512 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0328 00:04:26.401563   13512 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0328 00:04:26.415106   13512 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0328 00:04:26.446778   13512 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0328 00:04:26.463679   13512 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0328 00:04:26.477030   13512 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0328 00:04:26.509373   13512 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0328 00:04:26.528993   13512 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0328 00:04:26.542127   13512 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0328 00:04:26.563204   13512 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0328 00:04:27.068553   13512 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0328 00:04:44.050285   13512 kubeadm.go:309] [init] Using Kubernetes version: v1.29.3
	I0328 00:04:44.050285   13512 kubeadm.go:309] [preflight] Running pre-flight checks
	I0328 00:04:44.050285   13512 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0328 00:04:44.050977   13512 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0328 00:04:44.051256   13512 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0328 00:04:44.051256   13512 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0328 00:04:44.057027   13512 out.go:204]   - Generating certificates and keys ...
	I0328 00:04:44.057295   13512 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0328 00:04:44.057488   13512 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0328 00:04:44.057608   13512 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0328 00:04:44.057677   13512 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0328 00:04:44.057677   13512 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0328 00:04:44.057677   13512 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0328 00:04:44.058263   13512 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0328 00:04:44.058498   13512 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [ha-170000 localhost] and IPs [172.28.239.31 127.0.0.1 ::1]
	I0328 00:04:44.058601   13512 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0328 00:04:44.058804   13512 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [ha-170000 localhost] and IPs [172.28.239.31 127.0.0.1 ::1]
	I0328 00:04:44.058888   13512 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0328 00:04:44.058990   13512 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0328 00:04:44.059083   13512 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0328 00:04:44.059123   13512 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0328 00:04:44.059208   13512 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0328 00:04:44.059208   13512 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0328 00:04:44.059208   13512 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0328 00:04:44.059208   13512 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0328 00:04:44.059208   13512 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0328 00:04:44.059800   13512 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0328 00:04:44.059919   13512 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0328 00:04:44.063116   13512 out.go:204]   - Booting up control plane ...
	I0328 00:04:44.063116   13512 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0328 00:04:44.063116   13512 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0328 00:04:44.063804   13512 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0328 00:04:44.064082   13512 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0328 00:04:44.064216   13512 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0328 00:04:44.064216   13512 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0328 00:04:44.064216   13512 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0328 00:04:44.064861   13512 kubeadm.go:309] [apiclient] All control plane components are healthy after 9.581290 seconds
	I0328 00:04:44.065265   13512 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0328 00:04:44.065579   13512 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0328 00:04:44.065770   13512 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0328 00:04:44.065770   13512 kubeadm.go:309] [mark-control-plane] Marking the node ha-170000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0328 00:04:44.065770   13512 kubeadm.go:309] [bootstrap-token] Using token: bbl8hi.q2n8vw1p7nxt5s93
	I0328 00:04:44.069132   13512 out.go:204]   - Configuring RBAC rules ...
	I0328 00:04:44.069191   13512 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0328 00:04:44.069191   13512 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0328 00:04:44.069900   13512 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0328 00:04:44.070097   13512 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0328 00:04:44.070097   13512 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0328 00:04:44.070097   13512 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0328 00:04:44.070801   13512 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0328 00:04:44.071001   13512 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0328 00:04:44.071001   13512 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0328 00:04:44.071001   13512 kubeadm.go:309] 
	I0328 00:04:44.071001   13512 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0328 00:04:44.071001   13512 kubeadm.go:309] 
	I0328 00:04:44.071001   13512 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0328 00:04:44.071001   13512 kubeadm.go:309] 
	I0328 00:04:44.071001   13512 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0328 00:04:44.071700   13512 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0328 00:04:44.071817   13512 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0328 00:04:44.071817   13512 kubeadm.go:309] 
	I0328 00:04:44.072036   13512 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0328 00:04:44.072036   13512 kubeadm.go:309] 
	I0328 00:04:44.072243   13512 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0328 00:04:44.072243   13512 kubeadm.go:309] 
	I0328 00:04:44.072583   13512 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0328 00:04:44.072789   13512 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0328 00:04:44.072789   13512 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0328 00:04:44.073021   13512 kubeadm.go:309] 
	I0328 00:04:44.073191   13512 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0328 00:04:44.073387   13512 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0328 00:04:44.073387   13512 kubeadm.go:309] 
	I0328 00:04:44.073607   13512 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token bbl8hi.q2n8vw1p7nxt5s93 \
	I0328 00:04:44.073811   13512 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:0ddd52f56960165cc9115f65779af061c85c9b2eafbef06ee2128eb63ce54d7a \
	I0328 00:04:44.073811   13512 kubeadm.go:309] 	--control-plane 
	I0328 00:04:44.073811   13512 kubeadm.go:309] 
	I0328 00:04:44.073811   13512 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0328 00:04:44.073811   13512 kubeadm.go:309] 
	I0328 00:04:44.074416   13512 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token bbl8hi.q2n8vw1p7nxt5s93 \
	I0328 00:04:44.074416   13512 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:0ddd52f56960165cc9115f65779af061c85c9b2eafbef06ee2128eb63ce54d7a 
	I0328 00:04:44.074689   13512 cni.go:84] Creating CNI manager for ""
	I0328 00:04:44.074689   13512 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0328 00:04:44.079039   13512 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0328 00:04:44.094935   13512 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0328 00:04:44.104368   13512 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.29.3/kubectl ...
	I0328 00:04:44.104368   13512 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0328 00:04:44.180388   13512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0328 00:04:44.887312   13512 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0328 00:04:44.902050   13512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-170000 minikube.k8s.io/updated_at=2024_03_28T00_04_44_0700 minikube.k8s.io/version=v1.33.0-beta.0 minikube.k8s.io/commit=1a940f980f77c9bfc8ef93678db8c1f49c9dd79d minikube.k8s.io/name=ha-170000 minikube.k8s.io/primary=true
	I0328 00:04:44.902875   13512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 00:04:44.919130   13512 ops.go:34] apiserver oom_adj: -16
	I0328 00:04:45.221398   13512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 00:04:45.730101   13512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 00:04:46.233081   13512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 00:04:46.734954   13512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 00:04:47.222864   13512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 00:04:47.729957   13512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 00:04:48.226894   13512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 00:04:48.734611   13512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 00:04:49.233999   13512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 00:04:49.722783   13512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 00:04:50.237639   13512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 00:04:50.730152   13512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 00:04:51.236596   13512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 00:04:51.724284   13512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 00:04:52.229512   13512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 00:04:52.736471   13512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 00:04:53.229735   13512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 00:04:53.722544   13512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 00:04:54.229320   13512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 00:04:54.735039   13512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 00:04:55.235745   13512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 00:04:55.723650   13512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 00:04:56.231817   13512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 00:04:56.724414   13512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 00:04:56.874418   13512 kubeadm.go:1107] duration metric: took 11.9869745s to wait for elevateKubeSystemPrivileges
	W0328 00:04:56.874418   13512 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0328 00:04:56.874418   13512 kubeadm.go:393] duration metric: took 30.7178043s to StartCluster
	I0328 00:04:56.874418   13512 settings.go:142] acquiring lock: {Name:mk6b97e58c5fe8f88c3b8025e136ed13b1b7453d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 00:04:56.874418   13512 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0328 00:04:56.877203   13512 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\kubeconfig: {Name:mk4f4c590fd703778dedd3b8c3d630c561af8c6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 00:04:56.878666   13512 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0328 00:04:56.878763   13512 start.go:232] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:172.28.239.31 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0328 00:04:56.878763   13512 start.go:240] waiting for startup goroutines ...
	I0328 00:04:56.878889   13512 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0328 00:04:56.878994   13512 addons.go:69] Setting storage-provisioner=true in profile "ha-170000"
	I0328 00:04:56.878994   13512 addons.go:69] Setting default-storageclass=true in profile "ha-170000"
	I0328 00:04:56.878994   13512 addons.go:234] Setting addon storage-provisioner=true in "ha-170000"
	I0328 00:04:56.879107   13512 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-170000"
	I0328 00:04:56.879147   13512 host.go:66] Checking if "ha-170000" exists ...
	I0328 00:04:56.879398   13512 config.go:182] Loaded profile config "ha-170000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0328 00:04:56.881038   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-170000 ).state
	I0328 00:04:56.881341   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-170000 ).state
	I0328 00:04:57.053117   13512 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.28.224.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0328 00:04:57.696972   13512 start.go:948] {"host.minikube.internal": 172.28.224.1} host record injected into CoreDNS's ConfigMap
	I0328 00:04:59.262245   13512 main.go:141] libmachine: [stdout =====>] : Running
	
	I0328 00:04:59.262524   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:04:59.265237   13512 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0328 00:04:59.262596   13512 main.go:141] libmachine: [stdout =====>] : Running
	
	I0328 00:04:59.265278   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:04:59.265907   13512 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0328 00:04:59.267542   13512 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0328 00:04:59.267542   13512 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0328 00:04:59.267542   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-170000 ).state
	I0328 00:04:59.268231   13512 kapi.go:59] client config for ha-170000: &rest.Config{Host:"https://172.28.239.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-170000\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-170000\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x26ab500), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0328 00:04:59.269341   13512 cert_rotation.go:137] Starting client certificate rotation controller
	I0328 00:04:59.269341   13512 addons.go:234] Setting addon default-storageclass=true in "ha-170000"
	I0328 00:04:59.269976   13512 host.go:66] Checking if "ha-170000" exists ...
	I0328 00:04:59.270802   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-170000 ).state
	I0328 00:05:01.631356   13512 main.go:141] libmachine: [stdout =====>] : Running
	
	I0328 00:05:01.631356   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:05:01.631583   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-170000 ).networkadapters[0]).ipaddresses[0]
	I0328 00:05:01.748305   13512 main.go:141] libmachine: [stdout =====>] : Running
	
	I0328 00:05:01.749007   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:05:01.749081   13512 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0328 00:05:01.749081   13512 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0328 00:05:01.749081   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-170000 ).state
	I0328 00:05:04.071353   13512 main.go:141] libmachine: [stdout =====>] : Running
	
	I0328 00:05:04.071476   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:05:04.071541   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-170000 ).networkadapters[0]).ipaddresses[0]
	I0328 00:05:04.476614   13512 main.go:141] libmachine: [stdout =====>] : 172.28.239.31
	
	I0328 00:05:04.625714   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:05:04.626530   13512 sshutil.go:53] new ssh client: &{IP:172.28.239.31 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-170000\id_rsa Username:docker}
	I0328 00:05:04.779520   13512 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0328 00:05:06.863579   13512 main.go:141] libmachine: [stdout =====>] : 172.28.239.31
	
	I0328 00:05:06.863579   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:05:06.864650   13512 sshutil.go:53] new ssh client: &{IP:172.28.239.31 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-170000\id_rsa Username:docker}
	I0328 00:05:07.008495   13512 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0328 00:05:07.250980   13512 round_trippers.go:463] GET https://172.28.239.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0328 00:05:07.250980   13512 round_trippers.go:469] Request Headers:
	I0328 00:05:07.250980   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:05:07.250980   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:05:07.265874   13512 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0328 00:05:07.268022   13512 round_trippers.go:463] PUT https://172.28.239.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0328 00:05:07.268022   13512 round_trippers.go:469] Request Headers:
	I0328 00:05:07.268022   13512 round_trippers.go:473]     Content-Type: application/json
	I0328 00:05:07.268022   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:05:07.268022   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:05:07.275661   13512 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0328 00:05:07.280569   13512 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0328 00:05:07.283444   13512 addons.go:505] duration metric: took 10.4044913s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0328 00:05:07.283444   13512 start.go:245] waiting for cluster config update ...
	I0328 00:05:07.283444   13512 start.go:254] writing updated cluster config ...
	I0328 00:05:07.285862   13512 out.go:177] 
	I0328 00:05:07.297882   13512 config.go:182] Loaded profile config "ha-170000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0328 00:05:07.298076   13512 profile.go:142] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-170000\config.json ...
	I0328 00:05:07.304011   13512 out.go:177] * Starting "ha-170000-m02" control-plane node in "ha-170000" cluster
	I0328 00:05:07.306748   13512 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0328 00:05:07.306808   13512 cache.go:56] Caching tarball of preloaded images
	I0328 00:05:07.307204   13512 preload.go:173] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0328 00:05:07.307371   13512 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0328 00:05:07.307650   13512 profile.go:142] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-170000\config.json ...
	I0328 00:05:07.314009   13512 start.go:360] acquireMachinesLock for ha-170000-m02: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0328 00:05:07.314009   13512 start.go:364] duration metric: took 0s to acquireMachinesLock for "ha-170000-m02"
	I0328 00:05:07.314009   13512 start.go:93] Provisioning new machine with config: &{Name:ha-170000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuberne
tesVersion:v1.29.3 ClusterName:ha-170000 Namespace:default APIServerHAVIP:172.28.239.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.28.239.31 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisk
s:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0328 00:05:07.314009   13512 start.go:125] createHost starting for "m02" (driver="hyperv")
	I0328 00:05:07.319405   13512 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0328 00:05:07.319405   13512 start.go:159] libmachine.API.Create for "ha-170000" (driver="hyperv")
	I0328 00:05:07.319405   13512 client.go:168] LocalClient.Create starting
	I0328 00:05:07.320428   13512 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem
	I0328 00:05:07.320706   13512 main.go:141] libmachine: Decoding PEM data...
	I0328 00:05:07.320706   13512 main.go:141] libmachine: Parsing certificate...
	I0328 00:05:07.320706   13512 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem
	I0328 00:05:07.321126   13512 main.go:141] libmachine: Decoding PEM data...
	I0328 00:05:07.321126   13512 main.go:141] libmachine: Parsing certificate...
	I0328 00:05:07.321126   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0328 00:05:09.356776   13512 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0328 00:05:09.356776   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:05:09.356776   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0328 00:05:11.305061   13512 main.go:141] libmachine: [stdout =====>] : False
	
	I0328 00:05:11.306046   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:05:11.306206   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0328 00:05:12.914979   13512 main.go:141] libmachine: [stdout =====>] : True
	
	I0328 00:05:12.915461   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:05:12.915523   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0328 00:05:16.903950   13512 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0328 00:05:16.903950   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:05:16.906660   13512 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube6/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.0-1711559712-18485-amd64.iso...
	I0328 00:05:17.446540   13512 main.go:141] libmachine: Creating SSH key...
	I0328 00:05:17.511172   13512 main.go:141] libmachine: Creating VM...
	I0328 00:05:17.511172   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0328 00:05:20.612723   13512 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0328 00:05:20.612723   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:05:20.612723   13512 main.go:141] libmachine: Using switch "Default Switch"
	I0328 00:05:20.612723   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0328 00:05:22.578617   13512 main.go:141] libmachine: [stdout =====>] : True
	
	I0328 00:05:22.578617   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:05:22.578617   13512 main.go:141] libmachine: Creating VHD
	I0328 00:05:22.578617   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-170000-m02\fixed.vhd' -SizeBytes 10MB -Fixed
	I0328 00:05:26.537687   13512 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube6
	Path                    : C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-170000-m02\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 076652CA-7F4B-4D65-839E-2816676E6A32
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0328 00:05:26.538010   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:05:26.538137   13512 main.go:141] libmachine: Writing magic tar header
	I0328 00:05:26.538137   13512 main.go:141] libmachine: Writing SSH key tar header
	I0328 00:05:26.538927   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-170000-m02\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-170000-m02\disk.vhd' -VHDType Dynamic -DeleteSource
	I0328 00:05:29.843016   13512 main.go:141] libmachine: [stdout =====>] : 
	I0328 00:05:29.843016   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:05:29.843016   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-170000-m02\disk.vhd' -SizeBytes 20000MB
	I0328 00:05:32.515821   13512 main.go:141] libmachine: [stdout =====>] : 
	I0328 00:05:32.516843   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:05:32.516843   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-170000-m02 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-170000-m02' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0328 00:05:36.383060   13512 main.go:141] libmachine: [stdout =====>] : 
	Name          State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----          ----- ----------- ----------------- ------   ------             -------
	ha-170000-m02 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0328 00:05:36.383060   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:05:36.383060   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-170000-m02 -DynamicMemoryEnabled $false
	I0328 00:05:38.754580   13512 main.go:141] libmachine: [stdout =====>] : 
	I0328 00:05:38.754580   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:05:38.754850   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-170000-m02 -Count 2
	I0328 00:05:41.099786   13512 main.go:141] libmachine: [stdout =====>] : 
	I0328 00:05:41.100212   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:05:41.100325   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-170000-m02 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-170000-m02\boot2docker.iso'
	I0328 00:05:43.851046   13512 main.go:141] libmachine: [stdout =====>] : 
	I0328 00:05:43.851046   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:05:43.851046   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-170000-m02 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-170000-m02\disk.vhd'
	I0328 00:05:46.682875   13512 main.go:141] libmachine: [stdout =====>] : 
	I0328 00:05:46.683421   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:05:46.683421   13512 main.go:141] libmachine: Starting VM...
	I0328 00:05:46.683421   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-170000-m02
	I0328 00:05:49.931694   13512 main.go:141] libmachine: [stdout =====>] : 
	I0328 00:05:49.931694   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:05:49.931694   13512 main.go:141] libmachine: Waiting for host to start...
	I0328 00:05:49.931899   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-170000-m02 ).state
	I0328 00:05:52.337624   13512 main.go:141] libmachine: [stdout =====>] : Running
	
	I0328 00:05:52.337624   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:05:52.337624   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-170000-m02 ).networkadapters[0]).ipaddresses[0]
	I0328 00:05:55.018777   13512 main.go:141] libmachine: [stdout =====>] : 
	I0328 00:05:55.018777   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:05:56.028175   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-170000-m02 ).state
	I0328 00:05:58.368907   13512 main.go:141] libmachine: [stdout =====>] : Running
	
	I0328 00:05:58.368907   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:05:58.368907   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-170000-m02 ).networkadapters[0]).ipaddresses[0]
	I0328 00:06:01.108016   13512 main.go:141] libmachine: [stdout =====>] : 
	I0328 00:06:01.108016   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:06:02.119779   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-170000-m02 ).state
	I0328 00:06:04.480406   13512 main.go:141] libmachine: [stdout =====>] : Running
	
	I0328 00:06:04.480406   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:06:04.480406   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-170000-m02 ).networkadapters[0]).ipaddresses[0]
	I0328 00:06:07.157667   13512 main.go:141] libmachine: [stdout =====>] : 
	I0328 00:06:07.157667   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:06:08.162347   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-170000-m02 ).state
	I0328 00:06:10.468190   13512 main.go:141] libmachine: [stdout =====>] : Running
	
	I0328 00:06:10.469004   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:06:10.469065   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-170000-m02 ).networkadapters[0]).ipaddresses[0]
	I0328 00:06:13.157313   13512 main.go:141] libmachine: [stdout =====>] : 
	I0328 00:06:13.157313   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:06:14.172498   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-170000-m02 ).state
	I0328 00:06:16.526683   13512 main.go:141] libmachine: [stdout =====>] : Running
	
	I0328 00:06:16.526683   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:06:16.526683   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-170000-m02 ).networkadapters[0]).ipaddresses[0]
	I0328 00:06:19.333183   13512 main.go:141] libmachine: [stdout =====>] : 172.28.224.3
	
	I0328 00:06:19.333998   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:06:19.333998   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-170000-m02 ).state
	I0328 00:06:21.590768   13512 main.go:141] libmachine: [stdout =====>] : Running
	
	I0328 00:06:21.590768   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:06:21.590768   13512 machine.go:94] provisionDockerMachine start ...
	I0328 00:06:21.591673   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-170000-m02 ).state
	I0328 00:06:23.904071   13512 main.go:141] libmachine: [stdout =====>] : Running
	
	I0328 00:06:23.904071   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:06:23.904071   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-170000-m02 ).networkadapters[0]).ipaddresses[0]
	I0328 00:06:26.667444   13512 main.go:141] libmachine: [stdout =====>] : 172.28.224.3
	
	I0328 00:06:26.667444   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:06:26.674043   13512 main.go:141] libmachine: Using SSH client type: native
	I0328 00:06:26.674258   13512 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x12a9f80] 0x12acb60 <nil>  [] 0s} 172.28.224.3 22 <nil> <nil>}
	I0328 00:06:26.674258   13512 main.go:141] libmachine: About to run SSH command:
	hostname
	I0328 00:06:26.811696   13512 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0328 00:06:26.811696   13512 buildroot.go:166] provisioning hostname "ha-170000-m02"
	I0328 00:06:26.811696   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-170000-m02 ).state
	I0328 00:06:29.131444   13512 main.go:141] libmachine: [stdout =====>] : Running
	
	I0328 00:06:29.131444   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:06:29.131765   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-170000-m02 ).networkadapters[0]).ipaddresses[0]
	I0328 00:06:31.837883   13512 main.go:141] libmachine: [stdout =====>] : 172.28.224.3
	
	I0328 00:06:31.837883   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:06:31.845490   13512 main.go:141] libmachine: Using SSH client type: native
	I0328 00:06:31.846118   13512 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x12a9f80] 0x12acb60 <nil>  [] 0s} 172.28.224.3 22 <nil> <nil>}
	I0328 00:06:31.846118   13512 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-170000-m02 && echo "ha-170000-m02" | sudo tee /etc/hostname
	I0328 00:06:32.030205   13512 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-170000-m02
	
	I0328 00:06:32.030264   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-170000-m02 ).state
	I0328 00:06:34.332975   13512 main.go:141] libmachine: [stdout =====>] : Running
	
	I0328 00:06:34.332975   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:06:34.333082   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-170000-m02 ).networkadapters[0]).ipaddresses[0]
	I0328 00:06:37.053625   13512 main.go:141] libmachine: [stdout =====>] : 172.28.224.3
	
	I0328 00:06:37.054770   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:06:37.060282   13512 main.go:141] libmachine: Using SSH client type: native
	I0328 00:06:37.060282   13512 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x12a9f80] 0x12acb60 <nil>  [] 0s} 172.28.224.3 22 <nil> <nil>}
	I0328 00:06:37.060921   13512 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-170000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-170000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-170000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0328 00:06:37.223529   13512 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0328 00:06:37.223529   13512 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0328 00:06:37.223529   13512 buildroot.go:174] setting up certificates
	I0328 00:06:37.223529   13512 provision.go:84] configureAuth start
	I0328 00:06:37.223529   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-170000-m02 ).state
	I0328 00:06:39.519694   13512 main.go:141] libmachine: [stdout =====>] : Running
	
	I0328 00:06:39.519694   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:06:39.519694   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-170000-m02 ).networkadapters[0]).ipaddresses[0]
	I0328 00:06:42.281656   13512 main.go:141] libmachine: [stdout =====>] : 172.28.224.3
	
	I0328 00:06:42.282072   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:06:42.282148   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-170000-m02 ).state
	I0328 00:06:44.553218   13512 main.go:141] libmachine: [stdout =====>] : Running
	
	I0328 00:06:44.553218   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:06:44.553218   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-170000-m02 ).networkadapters[0]).ipaddresses[0]
	I0328 00:06:47.273230   13512 main.go:141] libmachine: [stdout =====>] : 172.28.224.3
	
	I0328 00:06:47.274092   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:06:47.274227   13512 provision.go:143] copyHostCerts
	I0328 00:06:47.274420   13512 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0328 00:06:47.274730   13512 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0328 00:06:47.274730   13512 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0328 00:06:47.275170   13512 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0328 00:06:47.276372   13512 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0328 00:06:47.276812   13512 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0328 00:06:47.276940   13512 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0328 00:06:47.277407   13512 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0328 00:06:47.278410   13512 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0328 00:06:47.278692   13512 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0328 00:06:47.278768   13512 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0328 00:06:47.279100   13512 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1675 bytes)
	I0328 00:06:47.279971   13512 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-170000-m02 san=[127.0.0.1 172.28.224.3 ha-170000-m02 localhost minikube]
	I0328 00:06:47.524734   13512 provision.go:177] copyRemoteCerts
	I0328 00:06:47.540342   13512 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0328 00:06:47.540444   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-170000-m02 ).state
	I0328 00:06:49.853777   13512 main.go:141] libmachine: [stdout =====>] : Running
	
	I0328 00:06:49.854847   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:06:49.854977   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-170000-m02 ).networkadapters[0]).ipaddresses[0]
	I0328 00:06:52.656964   13512 main.go:141] libmachine: [stdout =====>] : 172.28.224.3
	
	I0328 00:06:52.656964   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:06:52.657733   13512 sshutil.go:53] new ssh client: &{IP:172.28.224.3 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-170000-m02\id_rsa Username:docker}
	I0328 00:06:52.778676   13512 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (5.2383016s)
	I0328 00:06:52.778676   13512 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0328 00:06:52.778676   13512 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0328 00:06:52.829546   13512 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0328 00:06:52.830230   13512 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I0328 00:06:52.883823   13512 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0328 00:06:52.884465   13512 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0328 00:06:52.937921   13512 provision.go:87] duration metric: took 15.714296s to configureAuth
	I0328 00:06:52.937921   13512 buildroot.go:189] setting minikube options for container-runtime
	I0328 00:06:52.938614   13512 config.go:182] Loaded profile config "ha-170000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0328 00:06:52.938614   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-170000-m02 ).state
	I0328 00:06:55.228234   13512 main.go:141] libmachine: [stdout =====>] : Running
	
	I0328 00:06:55.228234   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:06:55.228234   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-170000-m02 ).networkadapters[0]).ipaddresses[0]
	I0328 00:06:57.962599   13512 main.go:141] libmachine: [stdout =====>] : 172.28.224.3
	
	I0328 00:06:57.963410   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:06:57.969438   13512 main.go:141] libmachine: Using SSH client type: native
	I0328 00:06:57.970172   13512 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x12a9f80] 0x12acb60 <nil>  [] 0s} 172.28.224.3 22 <nil> <nil>}
	I0328 00:06:57.970172   13512 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0328 00:06:58.115270   13512 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0328 00:06:58.115270   13512 buildroot.go:70] root file system type: tmpfs
	I0328 00:06:58.115270   13512 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0328 00:06:58.115532   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-170000-m02 ).state
	I0328 00:07:00.418973   13512 main.go:141] libmachine: [stdout =====>] : Running
	
	I0328 00:07:00.419513   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:07:00.419581   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-170000-m02 ).networkadapters[0]).ipaddresses[0]
	I0328 00:07:03.119785   13512 main.go:141] libmachine: [stdout =====>] : 172.28.224.3
	
	I0328 00:07:03.120911   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:07:03.126511   13512 main.go:141] libmachine: Using SSH client type: native
	I0328 00:07:03.127324   13512 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x12a9f80] 0x12acb60 <nil>  [] 0s} 172.28.224.3 22 <nil> <nil>}
	I0328 00:07:03.127324   13512 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.28.239.31"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0328 00:07:03.298421   13512 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.28.239.31
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0328 00:07:03.298421   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-170000-m02 ).state
	I0328 00:07:05.554691   13512 main.go:141] libmachine: [stdout =====>] : Running
	
	I0328 00:07:05.555514   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:07:05.555584   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-170000-m02 ).networkadapters[0]).ipaddresses[0]
	I0328 00:07:08.333940   13512 main.go:141] libmachine: [stdout =====>] : 172.28.224.3
	
	I0328 00:07:08.334948   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:07:08.341896   13512 main.go:141] libmachine: Using SSH client type: native
	I0328 00:07:08.342516   13512 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x12a9f80] 0x12acb60 <nil>  [] 0s} 172.28.224.3 22 <nil> <nil>}
	I0328 00:07:08.342701   13512 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0328 00:07:10.618646   13512 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0328 00:07:10.618873   13512 machine.go:97] duration metric: took 49.0270489s to provisionDockerMachine
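The `sudo diff -u … || { sudo mv …; systemctl restart docker; }` one-liner above installs the generated unit file only when it differs from what is already on disk (or when no unit exists yet, as the `can't stat` output shows). A minimal standalone sketch of that idempotent-install pattern, using scratch paths under `/tmp` rather than the real systemd unit so it is safe to run:

```shell
#!/bin/sh
# Idempotent file install: replace the target only when content differs.
# NEW and TARGET are illustrative scratch paths, not taken from the log.
NEW=/tmp/docker.service.new
TARGET=/tmp/docker.service

printf '[Unit]\nDescription=demo\n' > "$NEW"

# diff exits 0 when the files match; only on a mismatch (or a missing
# target, where diff exits 2) do we move the new file into place.
if ! diff -u "$TARGET" "$NEW" 2>/dev/null; then
    mv "$NEW" "$TARGET"
    echo "unit updated"
else
    rm -f "$NEW"
    echo "unit unchanged"
fi
```

In the real flow the `mv` is followed by `systemctl daemon-reload` and a service restart, which is why the update path is gated: an unchanged unit skips an unnecessary Docker restart.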
	I0328 00:07:10.618949   13512 client.go:171] duration metric: took 2m3.2987162s to LocalClient.Create
	I0328 00:07:10.619019   13512 start.go:167] duration metric: took 2m3.2988614s to libmachine.API.Create "ha-170000"
	I0328 00:07:10.619019   13512 start.go:293] postStartSetup for "ha-170000-m02" (driver="hyperv")
	I0328 00:07:10.619019   13512 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0328 00:07:10.635634   13512 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0328 00:07:10.635634   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-170000-m02 ).state
	I0328 00:07:12.926535   13512 main.go:141] libmachine: [stdout =====>] : Running
	
	I0328 00:07:12.926535   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:07:12.926535   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-170000-m02 ).networkadapters[0]).ipaddresses[0]
	I0328 00:07:15.678792   13512 main.go:141] libmachine: [stdout =====>] : 172.28.224.3
	
	I0328 00:07:15.678792   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:07:15.679454   13512 sshutil.go:53] new ssh client: &{IP:172.28.224.3 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-170000-m02\id_rsa Username:docker}
	I0328 00:07:15.789362   13512 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (5.153697s)
	I0328 00:07:15.803473   13512 ssh_runner.go:195] Run: cat /etc/os-release
	I0328 00:07:15.810731   13512 info.go:137] Remote host: Buildroot 2023.02.9
	I0328 00:07:15.810731   13512 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0328 00:07:15.810731   13512 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0328 00:07:15.811683   13512 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\104602.pem -> 104602.pem in /etc/ssl/certs
	I0328 00:07:15.811683   13512 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\104602.pem -> /etc/ssl/certs/104602.pem
	I0328 00:07:15.826457   13512 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0328 00:07:15.848393   13512 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\104602.pem --> /etc/ssl/certs/104602.pem (1708 bytes)
	I0328 00:07:15.901736   13512 start.go:296] duration metric: took 5.2826853s for postStartSetup
	I0328 00:07:15.905918   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-170000-m02 ).state
	I0328 00:07:18.239812   13512 main.go:141] libmachine: [stdout =====>] : Running
	
	I0328 00:07:18.239812   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:07:18.240530   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-170000-m02 ).networkadapters[0]).ipaddresses[0]
	I0328 00:07:21.023021   13512 main.go:141] libmachine: [stdout =====>] : 172.28.224.3
	
	I0328 00:07:21.023021   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:07:21.023021   13512 profile.go:142] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-170000\config.json ...
	I0328 00:07:21.026324   13512 start.go:128] duration metric: took 2m13.7114986s to createHost
	I0328 00:07:21.026435   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-170000-m02 ).state
	I0328 00:07:23.327939   13512 main.go:141] libmachine: [stdout =====>] : Running
	
	I0328 00:07:23.327939   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:07:23.327939   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-170000-m02 ).networkadapters[0]).ipaddresses[0]
	I0328 00:07:26.034219   13512 main.go:141] libmachine: [stdout =====>] : 172.28.224.3
	
	I0328 00:07:26.034219   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:07:26.039833   13512 main.go:141] libmachine: Using SSH client type: native
	I0328 00:07:26.040607   13512 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x12a9f80] 0x12acb60 <nil>  [] 0s} 172.28.224.3 22 <nil> <nil>}
	I0328 00:07:26.040607   13512 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0328 00:07:26.183476   13512 main.go:141] libmachine: SSH cmd err, output: <nil>: 1711584446.187547843
	
	I0328 00:07:26.183808   13512 fix.go:216] guest clock: 1711584446.187547843
	I0328 00:07:26.183808   13512 fix.go:229] Guest: 2024-03-28 00:07:26.187547843 +0000 UTC Remote: 2024-03-28 00:07:21.0264354 +0000 UTC m=+356.784366001 (delta=5.161112443s)
	I0328 00:07:26.183808   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-170000-m02 ).state
	I0328 00:07:28.466124   13512 main.go:141] libmachine: [stdout =====>] : Running
	
	I0328 00:07:28.466124   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:07:28.466124   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-170000-m02 ).networkadapters[0]).ipaddresses[0]
	I0328 00:07:31.201082   13512 main.go:141] libmachine: [stdout =====>] : 172.28.224.3
	
	I0328 00:07:31.201195   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:07:31.208394   13512 main.go:141] libmachine: Using SSH client type: native
	I0328 00:07:31.209080   13512 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x12a9f80] 0x12acb60 <nil>  [] 0s} 172.28.224.3 22 <nil> <nil>}
	I0328 00:07:31.209080   13512 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1711584446
	I0328 00:07:31.367297   13512 main.go:141] libmachine: SSH cmd err, output: <nil>: Thu Mar 28 00:07:26 UTC 2024
	
	I0328 00:07:31.368063   13512 fix.go:236] clock set: Thu Mar 28 00:07:26 UTC 2024
	 (err=<nil>)
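The sequence above reads the guest clock (`date +%s.%N`), computes the host/guest delta (5.16s here), and resets the guest with `sudo date -s @<epoch>`. A sketch of that drift check under stated assumptions (the 5-second offset is simulated, and the 2-second `THRESHOLD` is illustrative, not minikube's actual cutoff):

```shell
#!/bin/sh
# Sketch of the guest clock-drift check seen in the log: compare a
# "guest" epoch against the local one and decide whether to resync.
host_epoch=$(date +%s)
guest_epoch=$((host_epoch + 5))   # pretend the guest runs 5s ahead

delta=$((guest_epoch - host_epoch))
[ "$delta" -lt 0 ] && delta=$((-delta))   # drift in either direction

THRESHOLD=2                                # illustrative cutoff
if [ "$delta" -gt "$THRESHOLD" ]; then
    # On a real guest this step would be: ssh ... "sudo date -s @$host_epoch"
    echo "clock drift ${delta}s: would resync guest to @$host_epoch"
else
    echo "clock within ${THRESHOLD}s: no action"
fi
```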
	I0328 00:07:31.368157   13512 start.go:83] releasing machines lock for "ha-170000-m02", held for 2m24.0531746s
	I0328 00:07:31.368403   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-170000-m02 ).state
	I0328 00:07:33.694120   13512 main.go:141] libmachine: [stdout =====>] : Running
	
	I0328 00:07:33.694120   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:07:33.694288   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-170000-m02 ).networkadapters[0]).ipaddresses[0]
	I0328 00:07:36.422801   13512 main.go:141] libmachine: [stdout =====>] : 172.28.224.3
	
	I0328 00:07:36.422801   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:07:36.426247   13512 out.go:177] * Found network options:
	I0328 00:07:36.429394   13512 out.go:177]   - NO_PROXY=172.28.239.31
	W0328 00:07:36.431972   13512 proxy.go:119] fail to check proxy env: Error ip not in block
	I0328 00:07:36.434677   13512 out.go:177]   - NO_PROXY=172.28.239.31
	W0328 00:07:36.437169   13512 proxy.go:119] fail to check proxy env: Error ip not in block
	W0328 00:07:36.438588   13512 proxy.go:119] fail to check proxy env: Error ip not in block
	I0328 00:07:36.441320   13512 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0328 00:07:36.441320   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-170000-m02 ).state
	I0328 00:07:36.451302   13512 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0328 00:07:36.451302   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-170000-m02 ).state
	I0328 00:07:38.751450   13512 main.go:141] libmachine: [stdout =====>] : Running
	
	I0328 00:07:38.751662   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:07:38.751662   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-170000-m02 ).networkadapters[0]).ipaddresses[0]
	I0328 00:07:38.773029   13512 main.go:141] libmachine: [stdout =====>] : Running
	
	I0328 00:07:38.774019   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:07:38.774101   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-170000-m02 ).networkadapters[0]).ipaddresses[0]
	I0328 00:07:41.575039   13512 main.go:141] libmachine: [stdout =====>] : 172.28.224.3
	
	I0328 00:07:41.575039   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:07:41.576523   13512 sshutil.go:53] new ssh client: &{IP:172.28.224.3 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-170000-m02\id_rsa Username:docker}
	I0328 00:07:41.603110   13512 main.go:141] libmachine: [stdout =====>] : 172.28.224.3
	
	I0328 00:07:41.603215   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:07:41.603770   13512 sshutil.go:53] new ssh client: &{IP:172.28.224.3 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-170000-m02\id_rsa Username:docker}
	I0328 00:07:41.674730   13512 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (5.2232981s)
	W0328 00:07:41.674730   13512 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0328 00:07:41.687428   13512 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0328 00:07:41.763236   13512 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0328 00:07:41.763355   13512 start.go:494] detecting cgroup driver to use...
	I0328 00:07:41.763236   13512 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.3218829s)
	I0328 00:07:41.763599   13512 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0328 00:07:41.817616   13512 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0328 00:07:41.852851   13512 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0328 00:07:41.872994   13512 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0328 00:07:41.885629   13512 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0328 00:07:41.923626   13512 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0328 00:07:41.960487   13512 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0328 00:07:41.995368   13512 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0328 00:07:42.034518   13512 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0328 00:07:42.071006   13512 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0328 00:07:42.103255   13512 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0328 00:07:42.136287   13512 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0328 00:07:42.175670   13512 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0328 00:07:42.210074   13512 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0328 00:07:42.244740   13512 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0328 00:07:42.463312   13512 ssh_runner.go:195] Run: sudo systemctl restart containerd
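The run of `sed -i` commands above rewrites `/etc/containerd/config.toml` in place before the `daemon-reload`/`restart containerd` pair; one of them flips `SystemdCgroup` because the "cgroupfs" driver was selected. A sketch of that single edit against a scratch copy of the file (the two-line `config.toml` content here is a minimal stand-in, not the VM's full config):

```shell
#!/bin/sh
# Reproduce one of the config.toml rewrites from the log on a scratch
# file: force SystemdCgroup = false for the cgroupfs cgroup driver.
CFG=/tmp/config.toml.demo
cat > "$CFG" <<'EOF'
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true
EOF

# Same substitution minikube runs, preserving leading indentation (\1).
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$CFG"
grep SystemdCgroup "$CFG"
```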
	I0328 00:07:42.500475   13512 start.go:494] detecting cgroup driver to use...
	I0328 00:07:42.515066   13512 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0328 00:07:42.553167   13512 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0328 00:07:42.594906   13512 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0328 00:07:42.643785   13512 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0328 00:07:42.681262   13512 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0328 00:07:42.718178   13512 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0328 00:07:42.783718   13512 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0328 00:07:42.812479   13512 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0328 00:07:42.866745   13512 ssh_runner.go:195] Run: which cri-dockerd
	I0328 00:07:42.889106   13512 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0328 00:07:42.910627   13512 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0328 00:07:42.962501   13512 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0328 00:07:43.179611   13512 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0328 00:07:43.400250   13512 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0328 00:07:43.400250   13512 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0328 00:07:43.449535   13512 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0328 00:07:43.679147   13512 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0328 00:07:46.250045   13512 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5708825s)
	I0328 00:07:46.262711   13512 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0328 00:07:46.301215   13512 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0328 00:07:46.339722   13512 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0328 00:07:46.568199   13512 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0328 00:07:46.794560   13512 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0328 00:07:47.030753   13512 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0328 00:07:47.080010   13512 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0328 00:07:47.119907   13512 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0328 00:07:47.337692   13512 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0328 00:07:47.466608   13512 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0328 00:07:47.479640   13512 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0328 00:07:47.491643   13512 start.go:562] Will wait 60s for crictl version
	I0328 00:07:47.504248   13512 ssh_runner.go:195] Run: which crictl
	I0328 00:07:47.524970   13512 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0328 00:07:47.610796   13512 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.0
	RuntimeApiVersion:  v1
	I0328 00:07:47.620775   13512 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0328 00:07:47.663691   13512 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0328 00:07:47.703279   13512 out.go:204] * Preparing Kubernetes v1.29.3 on Docker 26.0.0 ...
	I0328 00:07:47.706887   13512 out.go:177]   - env NO_PROXY=172.28.239.31
	I0328 00:07:47.709891   13512 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0328 00:07:47.713908   13512 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0328 00:07:47.713908   13512 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0328 00:07:47.713908   13512 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0328 00:07:47.713908   13512 ip.go:207] Found interface: {Index:6 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:26:7a:39 Flags:up|broadcast|multicast|running}
	I0328 00:07:47.717912   13512 ip.go:210] interface addr: fe80::e3e0:8483:9c84:940f/64
	I0328 00:07:47.717912   13512 ip.go:210] interface addr: 172.28.224.1/20
	I0328 00:07:47.729907   13512 ssh_runner.go:195] Run: grep 172.28.224.1	host.minikube.internal$ /etc/hosts
	I0328 00:07:47.737125   13512 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.28.224.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
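The `/etc/hosts` command above is an idempotent append: strip any stale `host.minikube.internal` line, re-add the current mapping, and install the rewrite via a temp file so the hosts file is never left half-written. A sketch of the same pattern against a scratch file (the `10.0.0.9` stale entry and `/tmp/hosts.demo` path are invented for illustration):

```shell
#!/bin/sh
# Idempotent hosts-entry update, as in the log, on a scratch file.
HOSTS=/tmp/hosts.demo
printf '127.0.0.1\tlocalhost\n10.0.0.9\thost.minikube.internal\n' > "$HOSTS"

# Drop any existing mapping for the name, append the fresh one, then
# copy the rewrite into place in a single step via a temp file.
{ grep -v "$(printf '\t')host.minikube.internal\$" "$HOSTS"; \
  printf '172.28.224.1\thost.minikube.internal\n'; } > /tmp/h.$$
cp /tmp/h.$$ "$HOSTS" && rm -f /tmp/h.$$

cat "$HOSTS"
```

The preceding `grep 172.28.224.1	host.minikube.internal$ /etc/hosts` in the log is the cheap fast path: the rewrite only runs when the exact entry is not already present.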
	I0328 00:07:47.761024   13512 mustload.go:65] Loading cluster: ha-170000
	I0328 00:07:47.761139   13512 config.go:182] Loaded profile config "ha-170000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0328 00:07:47.762261   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-170000 ).state
	I0328 00:07:50.028825   13512 main.go:141] libmachine: [stdout =====>] : Running
	
	I0328 00:07:50.028825   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:07:50.029428   13512 host.go:66] Checking if "ha-170000" exists ...
	I0328 00:07:50.030370   13512 certs.go:68] Setting up C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-170000 for IP: 172.28.224.3
	I0328 00:07:50.030409   13512 certs.go:194] generating shared ca certs ...
	I0328 00:07:50.030409   13512 certs.go:226] acquiring lock for ca certs: {Name:mkc71405905d3cea24da832e98113e061e759324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 00:07:50.030998   13512 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key
	I0328 00:07:50.031532   13512 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key
	I0328 00:07:50.031859   13512 certs.go:256] generating profile certs ...
	I0328 00:07:50.032046   13512 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-170000\client.key
	I0328 00:07:50.032046   13512 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-170000\apiserver.key.37ab393e
	I0328 00:07:50.032873   13512 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-170000\apiserver.crt.37ab393e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.28.239.31 172.28.224.3 172.28.239.254]
	I0328 00:07:50.216254   13512 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-170000\apiserver.crt.37ab393e ...
	I0328 00:07:50.216254   13512 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-170000\apiserver.crt.37ab393e: {Name:mkbc210cc81156f002a806a051ff57fc39befd95 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 00:07:50.217662   13512 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-170000\apiserver.key.37ab393e ...
	I0328 00:07:50.217662   13512 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-170000\apiserver.key.37ab393e: {Name:mke26bef036ed69d4e4700d974f12ab136fbdff4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 00:07:50.219610   13512 certs.go:381] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-170000\apiserver.crt.37ab393e -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-170000\apiserver.crt
	I0328 00:07:50.232866   13512 certs.go:385] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-170000\apiserver.key.37ab393e -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-170000\apiserver.key
	I0328 00:07:50.233437   13512 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-170000\proxy-client.key
	I0328 00:07:50.234450   13512 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0328 00:07:50.234638   13512 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0328 00:07:50.234889   13512 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0328 00:07:50.234889   13512 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0328 00:07:50.235284   13512 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-170000\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0328 00:07:50.235463   13512 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-170000\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0328 00:07:50.235752   13512 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-170000\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0328 00:07:50.235752   13512 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-170000\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0328 00:07:50.236244   13512 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\10460.pem (1338 bytes)
	W0328 00:07:50.237112   13512 certs.go:480] ignoring C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\10460_empty.pem, impossibly tiny 0 bytes
	I0328 00:07:50.237278   13512 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0328 00:07:50.237824   13512 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0328 00:07:50.238382   13512 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0328 00:07:50.238413   13512 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0328 00:07:50.239331   13512 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\104602.pem (1708 bytes)
	I0328 00:07:50.239566   13512 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\104602.pem -> /usr/share/ca-certificates/104602.pem
	I0328 00:07:50.239566   13512 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0328 00:07:50.239566   13512 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\10460.pem -> /usr/share/ca-certificates/10460.pem
	I0328 00:07:50.240598   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-170000 ).state
	I0328 00:07:52.504337   13512 main.go:141] libmachine: [stdout =====>] : Running
	
	I0328 00:07:52.504337   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:07:52.504924   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-170000 ).networkadapters[0]).ipaddresses[0]
	I0328 00:07:55.257423   13512 main.go:141] libmachine: [stdout =====>] : 172.28.239.31
	
	I0328 00:07:55.257423   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:07:55.257423   13512 sshutil.go:53] new ssh client: &{IP:172.28.239.31 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-170000\id_rsa Username:docker}
	I0328 00:07:55.366960   13512 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0328 00:07:55.375363   13512 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0328 00:07:55.409034   13512 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0328 00:07:55.417029   13512 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0328 00:07:55.449871   13512 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0328 00:07:55.457980   13512 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0328 00:07:55.492079   13512 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0328 00:07:55.500387   13512 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0328 00:07:55.534896   13512 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0328 00:07:55.542786   13512 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0328 00:07:55.580128   13512 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0328 00:07:55.587932   13512 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0328 00:07:55.614540   13512 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0328 00:07:55.672567   13512 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0328 00:07:55.726887   13512 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0328 00:07:55.787545   13512 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0328 00:07:55.842747   13512 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-170000\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0328 00:07:55.896133   13512 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-170000\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0328 00:07:55.953949   13512 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-170000\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0328 00:07:56.005802   13512 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-170000\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0328 00:07:56.054356   13512 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\104602.pem --> /usr/share/ca-certificates/104602.pem (1708 bytes)
	I0328 00:07:56.108316   13512 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0328 00:07:56.159317   13512 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\10460.pem --> /usr/share/ca-certificates/10460.pem (1338 bytes)
	I0328 00:07:56.210148   13512 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0328 00:07:56.243540   13512 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0328 00:07:56.280008   13512 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0328 00:07:56.316776   13512 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0328 00:07:56.350633   13512 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0328 00:07:56.382606   13512 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0328 00:07:56.415104   13512 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (758 bytes)
	I0328 00:07:56.467559   13512 ssh_runner.go:195] Run: openssl version
	I0328 00:07:56.496461   13512 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0328 00:07:56.532306   13512 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0328 00:07:56.541192   13512 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 27 23:37 /usr/share/ca-certificates/minikubeCA.pem
	I0328 00:07:56.555348   13512 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0328 00:07:56.578094   13512 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0328 00:07:56.614533   13512 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10460.pem && ln -fs /usr/share/ca-certificates/10460.pem /etc/ssl/certs/10460.pem"
	I0328 00:07:56.647429   13512 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10460.pem
	I0328 00:07:56.654961   13512 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 27 23:40 /usr/share/ca-certificates/10460.pem
	I0328 00:07:56.670271   13512 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10460.pem
	I0328 00:07:56.692656   13512 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10460.pem /etc/ssl/certs/51391683.0"
	I0328 00:07:56.728495   13512 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/104602.pem && ln -fs /usr/share/ca-certificates/104602.pem /etc/ssl/certs/104602.pem"
	I0328 00:07:56.761630   13512 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/104602.pem
	I0328 00:07:56.770348   13512 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 27 23:40 /usr/share/ca-certificates/104602.pem
	I0328 00:07:56.784274   13512 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/104602.pem
	I0328 00:07:56.812857   13512 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/104602.pem /etc/ssl/certs/3ec20f2e.0"
	I0328 00:07:56.847038   13512 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0328 00:07:56.855653   13512 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0328 00:07:56.855860   13512 kubeadm.go:928] updating node {m02 172.28.224.3 8443 v1.29.3 docker true true} ...
	I0328 00:07:56.856042   13512 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-170000-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.28.224.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:ha-170000 Namespace:default APIServerHAVIP:172.28.239.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0328 00:07:56.856138   13512 kube-vip.go:111] generating kube-vip config ...
	I0328 00:07:56.869755   13512 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0328 00:07:56.897488   13512 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0328 00:07:56.897916   13512 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.28.239.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0328 00:07:56.910850   13512 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0328 00:07:56.929454   13512 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.29.3: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.29.3': No such file or directory
	
	Initiating transfer...
	I0328 00:07:56.943975   13512 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.29.3
	I0328 00:07:56.967618   13512 download.go:107] Downloading: https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubelet.sha256 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.29.3/kubelet
	I0328 00:07:56.968087   13512 download.go:107] Downloading: https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubeadm.sha256 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.29.3/kubeadm
	I0328 00:07:56.968087   13512 download.go:107] Downloading: https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubectl.sha256 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.29.3/kubectl
	I0328 00:07:58.111339   13512 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.29.3/kubectl -> /var/lib/minikube/binaries/v1.29.3/kubectl
	I0328 00:07:58.123061   13512 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.29.3/kubectl
	I0328 00:07:58.139077   13512 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.29.3/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.29.3/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.29.3/kubectl': No such file or directory
	I0328 00:07:58.139356   13512 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.29.3/kubectl --> /var/lib/minikube/binaries/v1.29.3/kubectl (49799168 bytes)
	I0328 00:07:58.186374   13512 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.29.3/kubeadm -> /var/lib/minikube/binaries/v1.29.3/kubeadm
	I0328 00:07:58.198369   13512 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.29.3/kubeadm
	I0328 00:07:58.266432   13512 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.29.3/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.29.3/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.29.3/kubeadm': No such file or directory
	I0328 00:07:58.266876   13512 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.29.3/kubeadm --> /var/lib/minikube/binaries/v1.29.3/kubeadm (48340992 bytes)
	I0328 00:07:59.003371   13512 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0328 00:07:59.032007   13512 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.29.3/kubelet -> /var/lib/minikube/binaries/v1.29.3/kubelet
	I0328 00:07:59.049904   13512 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.29.3/kubelet
	I0328 00:07:59.058087   13512 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.29.3/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.29.3/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.29.3/kubelet': No such file or directory
	I0328 00:07:59.058392   13512 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.29.3/kubelet --> /var/lib/minikube/binaries/v1.29.3/kubelet (111919104 bytes)
	I0328 00:07:59.721862   13512 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0328 00:07:59.743466   13512 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0328 00:07:59.778259   13512 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0328 00:07:59.813445   13512 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1346 bytes)
	I0328 00:07:59.865215   13512 ssh_runner.go:195] Run: grep 172.28.239.254	control-plane.minikube.internal$ /etc/hosts
	I0328 00:07:59.873961   13512 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.28.239.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0328 00:07:59.913405   13512 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0328 00:08:00.143254   13512 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0328 00:08:00.174620   13512 host.go:66] Checking if "ha-170000" exists ...
	I0328 00:08:00.174906   13512 start.go:316] joinCluster: &{Name:ha-170000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:ha-170000 Namespace:default APIServerHAVIP:172.28.239.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.28.239.31 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.28.224.3 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0328 00:08:00.175545   13512 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0328 00:08:00.175545   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-170000 ).state
	I0328 00:08:02.389273   13512 main.go:141] libmachine: [stdout =====>] : Running
	
	I0328 00:08:02.389273   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:08:02.389733   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-170000 ).networkadapters[0]).ipaddresses[0]
	I0328 00:08:05.110580   13512 main.go:141] libmachine: [stdout =====>] : 172.28.239.31
	
	I0328 00:08:05.110848   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:08:05.111401   13512 sshutil.go:53] new ssh client: &{IP:172.28.239.31 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-170000\id_rsa Username:docker}
	I0328 00:08:05.350405   13512 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm token create --print-join-command --ttl=0": (5.1748285s)
	I0328 00:08:05.350496   13512 start.go:342] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:172.28.224.3 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0328 00:08:05.350582   13512 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 2u3x2e.vtauwqwzkqqj4wk1 --discovery-token-ca-cert-hash sha256:0ddd52f56960165cc9115f65779af061c85c9b2eafbef06ee2128eb63ce54d7a --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-170000-m02 --control-plane --apiserver-advertise-address=172.28.224.3 --apiserver-bind-port=8443"
	I0328 00:08:55.545770   13512 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 2u3x2e.vtauwqwzkqqj4wk1 --discovery-token-ca-cert-hash sha256:0ddd52f56960165cc9115f65779af061c85c9b2eafbef06ee2128eb63ce54d7a --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-170000-m02 --control-plane --apiserver-advertise-address=172.28.224.3 --apiserver-bind-port=8443": (50.1948777s)
	I0328 00:08:55.545982   13512 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0328 00:08:56.572142   13512 ssh_runner.go:235] Completed: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet": (1.026106s)
	I0328 00:08:56.585707   13512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-170000-m02 minikube.k8s.io/updated_at=2024_03_28T00_08_56_0700 minikube.k8s.io/version=v1.33.0-beta.0 minikube.k8s.io/commit=1a940f980f77c9bfc8ef93678db8c1f49c9dd79d minikube.k8s.io/name=ha-170000 minikube.k8s.io/primary=false
	I0328 00:08:56.773565   13512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-170000-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0328 00:08:56.948520   13512 start.go:318] duration metric: took 56.7732633s to joinCluster
	I0328 00:08:56.948778   13512 start.go:234] Will wait 6m0s for node &{Name:m02 IP:172.28.224.3 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0328 00:08:56.951924   13512 out.go:177] * Verifying Kubernetes components...
	I0328 00:08:56.949474   13512 config.go:182] Loaded profile config "ha-170000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0328 00:08:56.969533   13512 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0328 00:08:57.441033   13512 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0328 00:08:57.492077   13512 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0328 00:08:57.493117   13512 kapi.go:59] client config for ha-170000: &rest.Config{Host:"https://172.28.239.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-170000\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-170000\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x26ab500), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0328 00:08:57.493197   13512 kubeadm.go:477] Overriding stale ClientConfig host https://172.28.239.254:8443 with https://172.28.239.31:8443
	I0328 00:08:57.493747   13512 node_ready.go:35] waiting up to 6m0s for node "ha-170000-m02" to be "Ready" ...
	I0328 00:08:57.494335   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m02
	I0328 00:08:57.494404   13512 round_trippers.go:469] Request Headers:
	I0328 00:08:57.494404   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:08:57.494404   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:08:57.510313   13512 round_trippers.go:574] Response Status: 200 OK in 15 milliseconds
	I0328 00:08:58.009266   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m02
	I0328 00:08:58.009266   13512 round_trippers.go:469] Request Headers:
	I0328 00:08:58.009266   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:08:58.009266   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:08:58.014467   13512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 00:08:58.500133   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m02
	I0328 00:08:58.500221   13512 round_trippers.go:469] Request Headers:
	I0328 00:08:58.500221   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:08:58.500221   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:08:58.506830   13512 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0328 00:08:59.005105   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m02
	I0328 00:08:59.005464   13512 round_trippers.go:469] Request Headers:
	I0328 00:08:59.005544   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:08:59.005568   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:08:59.015001   13512 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0328 00:08:59.494552   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m02
	I0328 00:08:59.494552   13512 round_trippers.go:469] Request Headers:
	I0328 00:08:59.494552   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:08:59.494552   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:08:59.499552   13512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 00:08:59.500343   13512 node_ready.go:53] node "ha-170000-m02" has status "Ready":"False"
	I0328 00:08:59.997110   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m02
	I0328 00:08:59.997110   13512 round_trippers.go:469] Request Headers:
	I0328 00:08:59.997110   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:08:59.997110   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:09:00.002973   13512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 00:09:00.501969   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m02
	I0328 00:09:00.501969   13512 round_trippers.go:469] Request Headers:
	I0328 00:09:00.501969   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:09:00.501969   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:09:00.506627   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:09:00.994982   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m02
	I0328 00:09:00.995033   13512 round_trippers.go:469] Request Headers:
	I0328 00:09:00.995033   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:09:00.995033   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:09:01.000668   13512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 00:09:01.501526   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m02
	I0328 00:09:01.501526   13512 round_trippers.go:469] Request Headers:
	I0328 00:09:01.501526   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:09:01.501526   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:09:01.509135   13512 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0328 00:09:01.510821   13512 node_ready.go:53] node "ha-170000-m02" has status "Ready":"False"
	I0328 00:09:02.007836   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m02
	I0328 00:09:02.007912   13512 round_trippers.go:469] Request Headers:
	I0328 00:09:02.007912   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:09:02.007912   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:09:02.013664   13512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 00:09:02.497593   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m02
	I0328 00:09:02.497691   13512 round_trippers.go:469] Request Headers:
	I0328 00:09:02.497691   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:09:02.497691   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:09:02.504768   13512 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0328 00:09:03.006068   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m02
	I0328 00:09:03.006139   13512 round_trippers.go:469] Request Headers:
	I0328 00:09:03.006139   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:09:03.006139   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:09:03.295318   13512 round_trippers.go:574] Response Status: 200 OK in 288 milliseconds
	I0328 00:09:03.506405   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m02
	I0328 00:09:03.506463   13512 round_trippers.go:469] Request Headers:
	I0328 00:09:03.506463   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:09:03.506463   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:09:03.511672   13512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 00:09:03.512532   13512 node_ready.go:53] node "ha-170000-m02" has status "Ready":"False"
	I0328 00:09:03.994483   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m02
	I0328 00:09:03.994616   13512 round_trippers.go:469] Request Headers:
	I0328 00:09:03.994681   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:09:03.994681   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:09:04.000163   13512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 00:09:04.497764   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m02
	I0328 00:09:04.497825   13512 round_trippers.go:469] Request Headers:
	I0328 00:09:04.497825   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:09:04.497825   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:09:04.504490   13512 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0328 00:09:04.999468   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m02
	I0328 00:09:04.999568   13512 round_trippers.go:469] Request Headers:
	I0328 00:09:04.999568   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:09:04.999568   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:09:05.005516   13512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 00:09:05.504541   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m02
	I0328 00:09:05.504541   13512 round_trippers.go:469] Request Headers:
	I0328 00:09:05.504541   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:09:05.504541   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:09:05.510072   13512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 00:09:06.008969   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m02
	I0328 00:09:06.008969   13512 round_trippers.go:469] Request Headers:
	I0328 00:09:06.008969   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:09:06.008969   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:09:06.016375   13512 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0328 00:09:06.017292   13512 node_ready.go:53] node "ha-170000-m02" has status "Ready":"False"
	I0328 00:09:06.502661   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m02
	I0328 00:09:06.502661   13512 round_trippers.go:469] Request Headers:
	I0328 00:09:06.502661   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:09:06.502661   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:09:06.508555   13512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 00:09:07.006915   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m02
	I0328 00:09:07.006975   13512 round_trippers.go:469] Request Headers:
	I0328 00:09:07.006975   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:09:07.006975   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:09:07.013757   13512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 00:09:07.508439   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m02
	I0328 00:09:07.508439   13512 round_trippers.go:469] Request Headers:
	I0328 00:09:07.508439   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:09:07.508439   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:09:07.513891   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:09:07.995347   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m02
	I0328 00:09:07.995347   13512 round_trippers.go:469] Request Headers:
	I0328 00:09:07.995347   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:09:07.995347   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:09:08.003942   13512 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0328 00:09:08.005281   13512 node_ready.go:49] node "ha-170000-m02" has status "Ready":"True"
	I0328 00:09:08.005281   13512 node_ready.go:38] duration metric: took 10.511469s for node "ha-170000-m02" to be "Ready" ...
	I0328 00:09:08.005357   13512 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0328 00:09:08.005524   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods
	I0328 00:09:08.005524   13512 round_trippers.go:469] Request Headers:
	I0328 00:09:08.005524   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:09:08.005524   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:09:08.013834   13512 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0328 00:09:08.022887   13512 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-5npq4" in "kube-system" namespace to be "Ready" ...
	I0328 00:09:08.022887   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-5npq4
	I0328 00:09:08.022887   13512 round_trippers.go:469] Request Headers:
	I0328 00:09:08.022887   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:09:08.022887   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:09:08.027691   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:09:08.029489   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000
	I0328 00:09:08.029489   13512 round_trippers.go:469] Request Headers:
	I0328 00:09:08.029608   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:09:08.029608   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:09:08.033702   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:09:08.034629   13512 pod_ready.go:92] pod "coredns-76f75df574-5npq4" in "kube-system" namespace has status "Ready":"True"
	I0328 00:09:08.034629   13512 pod_ready.go:81] duration metric: took 11.7424ms for pod "coredns-76f75df574-5npq4" in "kube-system" namespace to be "Ready" ...
	I0328 00:09:08.034629   13512 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-mgrhj" in "kube-system" namespace to be "Ready" ...
	I0328 00:09:08.034629   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-mgrhj
	I0328 00:09:08.034629   13512 round_trippers.go:469] Request Headers:
	I0328 00:09:08.034629   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:09:08.034629   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:09:08.038982   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:09:08.040076   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000
	I0328 00:09:08.040076   13512 round_trippers.go:469] Request Headers:
	I0328 00:09:08.040076   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:09:08.040076   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:09:08.045395   13512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 00:09:08.047072   13512 pod_ready.go:92] pod "coredns-76f75df574-mgrhj" in "kube-system" namespace has status "Ready":"True"
	I0328 00:09:08.047072   13512 pod_ready.go:81] duration metric: took 12.4424ms for pod "coredns-76f75df574-mgrhj" in "kube-system" namespace to be "Ready" ...
	I0328 00:09:08.047155   13512 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-170000" in "kube-system" namespace to be "Ready" ...
	I0328 00:09:08.047217   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/etcd-ha-170000
	I0328 00:09:08.047217   13512 round_trippers.go:469] Request Headers:
	I0328 00:09:08.047217   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:09:08.047217   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:09:08.062731   13512 round_trippers.go:574] Response Status: 200 OK in 15 milliseconds
	I0328 00:09:08.064273   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000
	I0328 00:09:08.064381   13512 round_trippers.go:469] Request Headers:
	I0328 00:09:08.064381   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:09:08.064381   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:09:08.067761   13512 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0328 00:09:08.069552   13512 pod_ready.go:92] pod "etcd-ha-170000" in "kube-system" namespace has status "Ready":"True"
	I0328 00:09:08.069552   13512 pod_ready.go:81] duration metric: took 22.3969ms for pod "etcd-ha-170000" in "kube-system" namespace to be "Ready" ...
	I0328 00:09:08.069552   13512 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-170000-m02" in "kube-system" namespace to be "Ready" ...
	I0328 00:09:08.069786   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/etcd-ha-170000-m02
	I0328 00:09:08.069786   13512 round_trippers.go:469] Request Headers:
	I0328 00:09:08.069786   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:09:08.069786   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:09:08.074176   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:09:08.075213   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m02
	I0328 00:09:08.075213   13512 round_trippers.go:469] Request Headers:
	I0328 00:09:08.075213   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:09:08.075213   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:09:08.080025   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:09:08.576937   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/etcd-ha-170000-m02
	I0328 00:09:08.576937   13512 round_trippers.go:469] Request Headers:
	I0328 00:09:08.577039   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:09:08.577039   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:09:08.583362   13512 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0328 00:09:08.584315   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m02
	I0328 00:09:08.584417   13512 round_trippers.go:469] Request Headers:
	I0328 00:09:08.584417   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:09:08.584417   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:09:08.588793   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:09:09.085472   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/etcd-ha-170000-m02
	I0328 00:09:09.085472   13512 round_trippers.go:469] Request Headers:
	I0328 00:09:09.085472   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:09:09.085472   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:09:09.090004   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:09:09.092184   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m02
	I0328 00:09:09.092184   13512 round_trippers.go:469] Request Headers:
	I0328 00:09:09.092184   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:09:09.092331   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:09:09.108495   13512 round_trippers.go:574] Response Status: 200 OK in 16 milliseconds
	I0328 00:09:09.581701   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/etcd-ha-170000-m02
	I0328 00:09:09.581918   13512 round_trippers.go:469] Request Headers:
	I0328 00:09:09.581918   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:09:09.581918   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:09:09.588225   13512 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0328 00:09:09.589829   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m02
	I0328 00:09:09.589862   13512 round_trippers.go:469] Request Headers:
	I0328 00:09:09.589862   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:09:09.589862   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:09:09.594801   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:09:10.072783   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/etcd-ha-170000-m02
	I0328 00:09:10.072919   13512 round_trippers.go:469] Request Headers:
	I0328 00:09:10.072919   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:09:10.072919   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:09:10.078373   13512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 00:09:10.080683   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m02
	I0328 00:09:10.080756   13512 round_trippers.go:469] Request Headers:
	I0328 00:09:10.080756   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:09:10.080756   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:09:10.085417   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:09:10.086349   13512 pod_ready.go:102] pod "etcd-ha-170000-m02" in "kube-system" namespace has status "Ready":"False"
	I0328 00:09:10.580809   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/etcd-ha-170000-m02
	I0328 00:09:10.580928   13512 round_trippers.go:469] Request Headers:
	I0328 00:09:10.580928   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:09:10.580928   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:09:10.595269   13512 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0328 00:09:10.596905   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m02
	I0328 00:09:10.596972   13512 round_trippers.go:469] Request Headers:
	I0328 00:09:10.596972   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:09:10.596972   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:09:10.600950   13512 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0328 00:09:11.069987   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/etcd-ha-170000-m02
	I0328 00:09:11.069987   13512 round_trippers.go:469] Request Headers:
	I0328 00:09:11.069987   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:09:11.069987   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:09:11.076105   13512 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0328 00:09:11.076996   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m02
	I0328 00:09:11.076996   13512 round_trippers.go:469] Request Headers:
	I0328 00:09:11.076996   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:09:11.076996   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:09:11.081993   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:09:11.083508   13512 pod_ready.go:92] pod "etcd-ha-170000-m02" in "kube-system" namespace has status "Ready":"True"
	I0328 00:09:11.083571   13512 pod_ready.go:81] duration metric: took 3.0139995s for pod "etcd-ha-170000-m02" in "kube-system" namespace to be "Ready" ...
	I0328 00:09:11.083571   13512 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-170000" in "kube-system" namespace to be "Ready" ...
	I0328 00:09:11.083626   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000
	I0328 00:09:11.083626   13512 round_trippers.go:469] Request Headers:
	I0328 00:09:11.083626   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:09:11.083626   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:09:11.088192   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:09:11.089192   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000
	I0328 00:09:11.089192   13512 round_trippers.go:469] Request Headers:
	I0328 00:09:11.089192   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:09:11.089192   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:09:11.095227   13512 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0328 00:09:11.096247   13512 pod_ready.go:92] pod "kube-apiserver-ha-170000" in "kube-system" namespace has status "Ready":"True"
	I0328 00:09:11.096247   13512 pod_ready.go:81] duration metric: took 12.6762ms for pod "kube-apiserver-ha-170000" in "kube-system" namespace to be "Ready" ...
	I0328 00:09:11.096247   13512 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-170000-m02" in "kube-system" namespace to be "Ready" ...
	I0328 00:09:11.096247   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m02
	I0328 00:09:11.096247   13512 round_trippers.go:469] Request Headers:
	I0328 00:09:11.096247   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:09:11.096247   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:09:11.101280   13512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 00:09:11.210636   13512 request.go:629] Waited for 107.4398ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.239.31:8443/api/v1/nodes/ha-170000-m02
	I0328 00:09:11.210738   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m02
	I0328 00:09:11.210738   13512 round_trippers.go:469] Request Headers:
	I0328 00:09:11.210738   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:09:11.210738   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:09:11.220758   13512 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0328 00:09:11.221499   13512 pod_ready.go:92] pod "kube-apiserver-ha-170000-m02" in "kube-system" namespace has status "Ready":"True"
	I0328 00:09:11.221499   13512 pod_ready.go:81] duration metric: took 125.2509ms for pod "kube-apiserver-ha-170000-m02" in "kube-system" namespace to be "Ready" ...
	I0328 00:09:11.221499   13512 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-170000" in "kube-system" namespace to be "Ready" ...
	I0328 00:09:11.397809   13512 request.go:629] Waited for 176.185ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-170000
	I0328 00:09:11.397950   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-170000
	I0328 00:09:11.397994   13512 round_trippers.go:469] Request Headers:
	I0328 00:09:11.397994   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:09:11.397994   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:09:11.403406   13512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 00:09:11.600959   13512 request.go:629] Waited for 196.1563ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.239.31:8443/api/v1/nodes/ha-170000
	I0328 00:09:11.601208   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000
	I0328 00:09:11.601208   13512 round_trippers.go:469] Request Headers:
	I0328 00:09:11.601208   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:09:11.601208   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:09:11.610580   13512 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0328 00:09:11.610580   13512 pod_ready.go:92] pod "kube-controller-manager-ha-170000" in "kube-system" namespace has status "Ready":"True"
	I0328 00:09:11.610580   13512 pod_ready.go:81] duration metric: took 389.0793ms for pod "kube-controller-manager-ha-170000" in "kube-system" namespace to be "Ready" ...
	I0328 00:09:11.610580   13512 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-170000-m02" in "kube-system" namespace to be "Ready" ...
	I0328 00:09:11.803766   13512 request.go:629] Waited for 192.156ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-170000-m02
	I0328 00:09:11.803880   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-170000-m02
	I0328 00:09:11.803880   13512 round_trippers.go:469] Request Headers:
	I0328 00:09:11.803880   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:09:11.804056   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:09:11.809478   13512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 00:09:12.005669   13512 request.go:629] Waited for 195.0792ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.239.31:8443/api/v1/nodes/ha-170000-m02
	I0328 00:09:12.005910   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m02
	I0328 00:09:12.005910   13512 round_trippers.go:469] Request Headers:
	I0328 00:09:12.005910   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:09:12.005910   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:09:12.010993   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:09:12.012190   13512 pod_ready.go:92] pod "kube-controller-manager-ha-170000-m02" in "kube-system" namespace has status "Ready":"True"
	I0328 00:09:12.012237   13512 pod_ready.go:81] duration metric: took 401.6539ms for pod "kube-controller-manager-ha-170000-m02" in "kube-system" namespace to be "Ready" ...
	I0328 00:09:12.012237   13512 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-w2z74" in "kube-system" namespace to be "Ready" ...
	I0328 00:09:12.207324   13512 request.go:629] Waited for 195.0862ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-proxy-w2z74
	I0328 00:09:12.207324   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-proxy-w2z74
	I0328 00:09:12.207324   13512 round_trippers.go:469] Request Headers:
	I0328 00:09:12.207324   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:09:12.207324   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:09:12.212918   13512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 00:09:12.409814   13512 request.go:629] Waited for 195.4469ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.239.31:8443/api/v1/nodes/ha-170000
	I0328 00:09:12.410372   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000
	I0328 00:09:12.410372   13512 round_trippers.go:469] Request Headers:
	I0328 00:09:12.410372   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:09:12.410372   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:09:12.415722   13512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 00:09:12.417482   13512 pod_ready.go:92] pod "kube-proxy-w2z74" in "kube-system" namespace has status "Ready":"True"
	I0328 00:09:12.417608   13512 pod_ready.go:81] duration metric: took 405.3683ms for pod "kube-proxy-w2z74" in "kube-system" namespace to be "Ready" ...
	I0328 00:09:12.417608   13512 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-wrvmg" in "kube-system" namespace to be "Ready" ...
	I0328 00:09:12.598869   13512 request.go:629] Waited for 181.0726ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-proxy-wrvmg
	I0328 00:09:12.599176   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-proxy-wrvmg
	I0328 00:09:12.599176   13512 round_trippers.go:469] Request Headers:
	I0328 00:09:12.599176   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:09:12.599176   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:09:12.604624   13512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 00:09:12.801254   13512 request.go:629] Waited for 194.9639ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.239.31:8443/api/v1/nodes/ha-170000-m02
	I0328 00:09:12.801513   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m02
	I0328 00:09:12.801513   13512 round_trippers.go:469] Request Headers:
	I0328 00:09:12.801513   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:09:12.801513   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:09:12.806837   13512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 00:09:12.808030   13512 pod_ready.go:92] pod "kube-proxy-wrvmg" in "kube-system" namespace has status "Ready":"True"
	I0328 00:09:12.808030   13512 pod_ready.go:81] duration metric: took 390.4204ms for pod "kube-proxy-wrvmg" in "kube-system" namespace to be "Ready" ...
	I0328 00:09:12.808607   13512 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-170000" in "kube-system" namespace to be "Ready" ...
	I0328 00:09:13.006922   13512 request.go:629] Waited for 198.313ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-170000
	I0328 00:09:13.007252   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-170000
	I0328 00:09:13.007252   13512 round_trippers.go:469] Request Headers:
	I0328 00:09:13.007252   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:09:13.007252   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:09:13.013306   13512 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0328 00:09:13.195631   13512 request.go:629] Waited for 180.9582ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.239.31:8443/api/v1/nodes/ha-170000
	I0328 00:09:13.195740   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000
	I0328 00:09:13.195740   13512 round_trippers.go:469] Request Headers:
	I0328 00:09:13.195909   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:09:13.195909   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:09:13.201619   13512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 00:09:13.202988   13512 pod_ready.go:92] pod "kube-scheduler-ha-170000" in "kube-system" namespace has status "Ready":"True"
	I0328 00:09:13.202988   13512 pod_ready.go:81] duration metric: took 394.3786ms for pod "kube-scheduler-ha-170000" in "kube-system" namespace to be "Ready" ...
	I0328 00:09:13.203061   13512 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-170000-m02" in "kube-system" namespace to be "Ready" ...
	I0328 00:09:13.398942   13512 request.go:629] Waited for 195.8226ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-170000-m02
	I0328 00:09:13.399172   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-170000-m02
	I0328 00:09:13.399172   13512 round_trippers.go:469] Request Headers:
	I0328 00:09:13.399294   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:09:13.399294   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:09:13.408451   13512 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0328 00:09:13.603041   13512 request.go:629] Waited for 192.7566ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.239.31:8443/api/v1/nodes/ha-170000-m02
	I0328 00:09:13.603211   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m02
	I0328 00:09:13.603211   13512 round_trippers.go:469] Request Headers:
	I0328 00:09:13.603211   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:09:13.603211   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:09:13.613103   13512 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0328 00:09:13.614241   13512 pod_ready.go:92] pod "kube-scheduler-ha-170000-m02" in "kube-system" namespace has status "Ready":"True"
	I0328 00:09:13.614241   13512 pod_ready.go:81] duration metric: took 411.1768ms for pod "kube-scheduler-ha-170000-m02" in "kube-system" namespace to be "Ready" ...
	I0328 00:09:13.614315   13512 pod_ready.go:38] duration metric: took 5.6089231s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0328 00:09:13.614373   13512 api_server.go:52] waiting for apiserver process to appear ...
	I0328 00:09:13.627313   13512 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 00:09:13.660497   13512 api_server.go:72] duration metric: took 16.7115223s to wait for apiserver process to appear ...
	I0328 00:09:13.660497   13512 api_server.go:88] waiting for apiserver healthz status ...
	I0328 00:09:13.660497   13512 api_server.go:253] Checking apiserver healthz at https://172.28.239.31:8443/healthz ...
	I0328 00:09:13.672402   13512 api_server.go:279] https://172.28.239.31:8443/healthz returned 200:
	ok
	I0328 00:09:13.672402   13512 round_trippers.go:463] GET https://172.28.239.31:8443/version
	I0328 00:09:13.672402   13512 round_trippers.go:469] Request Headers:
	I0328 00:09:13.672402   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:09:13.672402   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:09:13.672951   13512 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0328 00:09:13.672951   13512 api_server.go:141] control plane version: v1.29.3
	I0328 00:09:13.672951   13512 api_server.go:131] duration metric: took 12.4531ms to wait for apiserver health ...
	I0328 00:09:13.672951   13512 system_pods.go:43] waiting for kube-system pods to appear ...
	I0328 00:09:13.805680   13512 request.go:629] Waited for 132.443ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods
	I0328 00:09:13.805680   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods
	I0328 00:09:13.805680   13512 round_trippers.go:469] Request Headers:
	I0328 00:09:13.805813   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:09:13.805813   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:09:13.815249   13512 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0328 00:09:13.824109   13512 system_pods.go:59] 17 kube-system pods found
	I0328 00:09:13.824188   13512 system_pods.go:61] "coredns-76f75df574-5npq4" [b4a0463f-825d-4255-8704-6f41119d0930] Running
	I0328 00:09:13.824188   13512 system_pods.go:61] "coredns-76f75df574-mgrhj" [99d60631-1b51-4a6c-8819-5211bda5280d] Running
	I0328 00:09:13.824188   13512 system_pods.go:61] "etcd-ha-170000" [845298f4-b42f-4a38-888d-eda92aba2483] Running
	I0328 00:09:13.824188   13512 system_pods.go:61] "etcd-ha-170000-m02" [e37bcbf6-ea52-4df9-85e5-075621af992e] Running
	I0328 00:09:13.824188   13512 system_pods.go:61] "kindnet-n4x2r" [3b4b74d3-f82e-4337-a430-63ff92ca0efd] Running
	I0328 00:09:13.824188   13512 system_pods.go:61] "kindnet-xf7sr" [32758e2b-9a9f-4f89-9e6e-e1594abc2019] Running
	I0328 00:09:13.824188   13512 system_pods.go:61] "kube-apiserver-ha-170000" [0a3b4585-9f02-46b3-84cf-b4920d4dd1e3] Running
	I0328 00:09:13.824188   13512 system_pods.go:61] "kube-apiserver-ha-170000-m02" [3c02a8b5-5251-48fb-9865-bbdd879129bd] Running
	I0328 00:09:13.824188   13512 system_pods.go:61] "kube-controller-manager-ha-170000" [0062a6c2-2560-410f-b286-06409e50d26f] Running
	I0328 00:09:13.824264   13512 system_pods.go:61] "kube-controller-manager-ha-170000-m02" [4b136d09-f721-4103-b51b-ad58673ef4e2] Running
	I0328 00:09:13.824264   13512 system_pods.go:61] "kube-proxy-w2z74" [e88fc457-735e-4a67-89a1-223af2ea10d9] Running
	I0328 00:09:13.824307   13512 system_pods.go:61] "kube-proxy-wrvmg" [a049745a-2586-4e19-b8a9-ca96fead5905] Running
	I0328 00:09:13.824307   13512 system_pods.go:61] "kube-scheduler-ha-170000" [e11fffcf-8ff5-421d-9151-e00cd9a639a1] Running
	I0328 00:09:13.824307   13512 system_pods.go:61] "kube-scheduler-ha-170000-m02" [4bb54c59-156a-42a0-bca0-fb43cd4cbe27] Running
	I0328 00:09:13.824307   13512 system_pods.go:61] "kube-vip-ha-170000" [f958566a-56f8-436a-b5b4-8823c6cb2e2c] Running
	I0328 00:09:13.824307   13512 system_pods.go:61] "kube-vip-ha-170000-m02" [0380ec5c-628c-429c-8f5f-36260dc029f4] Running
	I0328 00:09:13.824307   13512 system_pods.go:61] "storage-provisioner" [5586fd50-77c3-4335-8c64-1120c6a32034] Running
	I0328 00:09:13.824307   13512 system_pods.go:74] duration metric: took 151.3551ms to wait for pod list to return data ...
	I0328 00:09:13.824307   13512 default_sa.go:34] waiting for default service account to be created ...
	I0328 00:09:14.011087   13512 request.go:629] Waited for 186.5671ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.239.31:8443/api/v1/namespaces/default/serviceaccounts
	I0328 00:09:14.011362   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/default/serviceaccounts
	I0328 00:09:14.011362   13512 round_trippers.go:469] Request Headers:
	I0328 00:09:14.011362   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:09:14.011428   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:09:14.016313   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:09:14.016440   13512 default_sa.go:45] found service account: "default"
	I0328 00:09:14.016440   13512 default_sa.go:55] duration metric: took 192.132ms for default service account to be created ...
	I0328 00:09:14.016440   13512 system_pods.go:116] waiting for k8s-apps to be running ...
	I0328 00:09:14.199130   13512 request.go:629] Waited for 182.5328ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods
	I0328 00:09:14.199308   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods
	I0328 00:09:14.199423   13512 round_trippers.go:469] Request Headers:
	I0328 00:09:14.199423   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:09:14.199423   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:09:14.209138   13512 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0328 00:09:14.217659   13512 system_pods.go:86] 17 kube-system pods found
	I0328 00:09:14.217711   13512 system_pods.go:89] "coredns-76f75df574-5npq4" [b4a0463f-825d-4255-8704-6f41119d0930] Running
	I0328 00:09:14.217711   13512 system_pods.go:89] "coredns-76f75df574-mgrhj" [99d60631-1b51-4a6c-8819-5211bda5280d] Running
	I0328 00:09:14.217711   13512 system_pods.go:89] "etcd-ha-170000" [845298f4-b42f-4a38-888d-eda92aba2483] Running
	I0328 00:09:14.217711   13512 system_pods.go:89] "etcd-ha-170000-m02" [e37bcbf6-ea52-4df9-85e5-075621af992e] Running
	I0328 00:09:14.217711   13512 system_pods.go:89] "kindnet-n4x2r" [3b4b74d3-f82e-4337-a430-63ff92ca0efd] Running
	I0328 00:09:14.217711   13512 system_pods.go:89] "kindnet-xf7sr" [32758e2b-9a9f-4f89-9e6e-e1594abc2019] Running
	I0328 00:09:14.217711   13512 system_pods.go:89] "kube-apiserver-ha-170000" [0a3b4585-9f02-46b3-84cf-b4920d4dd1e3] Running
	I0328 00:09:14.217711   13512 system_pods.go:89] "kube-apiserver-ha-170000-m02" [3c02a8b5-5251-48fb-9865-bbdd879129bd] Running
	I0328 00:09:14.217711   13512 system_pods.go:89] "kube-controller-manager-ha-170000" [0062a6c2-2560-410f-b286-06409e50d26f] Running
	I0328 00:09:14.217711   13512 system_pods.go:89] "kube-controller-manager-ha-170000-m02" [4b136d09-f721-4103-b51b-ad58673ef4e2] Running
	I0328 00:09:14.217711   13512 system_pods.go:89] "kube-proxy-w2z74" [e88fc457-735e-4a67-89a1-223af2ea10d9] Running
	I0328 00:09:14.217711   13512 system_pods.go:89] "kube-proxy-wrvmg" [a049745a-2586-4e19-b8a9-ca96fead5905] Running
	I0328 00:09:14.217711   13512 system_pods.go:89] "kube-scheduler-ha-170000" [e11fffcf-8ff5-421d-9151-e00cd9a639a1] Running
	I0328 00:09:14.217711   13512 system_pods.go:89] "kube-scheduler-ha-170000-m02" [4bb54c59-156a-42a0-bca0-fb43cd4cbe27] Running
	I0328 00:09:14.217711   13512 system_pods.go:89] "kube-vip-ha-170000" [f958566a-56f8-436a-b5b4-8823c6cb2e2c] Running
	I0328 00:09:14.217711   13512 system_pods.go:89] "kube-vip-ha-170000-m02" [0380ec5c-628c-429c-8f5f-36260dc029f4] Running
	I0328 00:09:14.217711   13512 system_pods.go:89] "storage-provisioner" [5586fd50-77c3-4335-8c64-1120c6a32034] Running
	I0328 00:09:14.217711   13512 system_pods.go:126] duration metric: took 201.2702ms to wait for k8s-apps to be running ...
	I0328 00:09:14.217711   13512 system_svc.go:44] waiting for kubelet service to be running ....
	I0328 00:09:14.232445   13512 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0328 00:09:14.263498   13512 system_svc.go:56] duration metric: took 45.7864ms WaitForService to wait for kubelet
	I0328 00:09:14.263498   13512 kubeadm.go:576] duration metric: took 17.3145191s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0328 00:09:14.263498   13512 node_conditions.go:102] verifying NodePressure condition ...
	I0328 00:09:14.405125   13512 request.go:629] Waited for 141.3461ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.239.31:8443/api/v1/nodes
	I0328 00:09:14.405340   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes
	I0328 00:09:14.405340   13512 round_trippers.go:469] Request Headers:
	I0328 00:09:14.405340   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:09:14.405404   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:09:14.415830   13512 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0328 00:09:14.416431   13512 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0328 00:09:14.416431   13512 node_conditions.go:123] node cpu capacity is 2
	I0328 00:09:14.416431   13512 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0328 00:09:14.416431   13512 node_conditions.go:123] node cpu capacity is 2
	I0328 00:09:14.416431   13512 node_conditions.go:105] duration metric: took 152.932ms to run NodePressure ...
	I0328 00:09:14.416431   13512 start.go:240] waiting for startup goroutines ...
	I0328 00:09:14.416431   13512 start.go:254] writing updated cluster config ...
	I0328 00:09:14.421256   13512 out.go:177] 
	I0328 00:09:14.434298   13512 config.go:182] Loaded profile config "ha-170000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0328 00:09:14.434298   13512 profile.go:142] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-170000\config.json ...
	I0328 00:09:14.451802   13512 out.go:177] * Starting "ha-170000-m03" control-plane node in "ha-170000" cluster
	I0328 00:09:14.453792   13512 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0328 00:09:14.453792   13512 cache.go:56] Caching tarball of preloaded images
	I0328 00:09:14.454785   13512 preload.go:173] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0328 00:09:14.454785   13512 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0328 00:09:14.456893   13512 profile.go:142] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-170000\config.json ...
	I0328 00:09:14.458804   13512 start.go:360] acquireMachinesLock for ha-170000-m03: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0328 00:09:14.458804   13512 start.go:364] duration metric: took 0s to acquireMachinesLock for "ha-170000-m03"
	I0328 00:09:14.458804   13512 start.go:93] Provisioning new machine with config: &{Name:ha-170000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuberne
tesVersion:v1.29.3 ClusterName:ha-170000 Namespace:default APIServerHAVIP:172.28.239.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.28.239.31 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.28.224.3 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false i
ngress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker Binary
Mirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0328 00:09:14.459791   13512 start.go:125] createHost starting for "m03" (driver="hyperv")
	I0328 00:09:14.462807   13512 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0328 00:09:14.462807   13512 start.go:159] libmachine.API.Create for "ha-170000" (driver="hyperv")
	I0328 00:09:14.462807   13512 client.go:168] LocalClient.Create starting
	I0328 00:09:14.463799   13512 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem
	I0328 00:09:14.463799   13512 main.go:141] libmachine: Decoding PEM data...
	I0328 00:09:14.463799   13512 main.go:141] libmachine: Parsing certificate...
	I0328 00:09:14.463799   13512 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem
	I0328 00:09:14.464792   13512 main.go:141] libmachine: Decoding PEM data...
	I0328 00:09:14.464792   13512 main.go:141] libmachine: Parsing certificate...
	I0328 00:09:14.464792   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0328 00:09:16.606439   13512 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0328 00:09:16.606439   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:09:16.607241   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0328 00:09:18.581607   13512 main.go:141] libmachine: [stdout =====>] : False
	
	I0328 00:09:18.582547   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:09:18.582547   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0328 00:09:20.215435   13512 main.go:141] libmachine: [stdout =====>] : True
	
	I0328 00:09:20.215556   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:09:20.215556   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0328 00:09:24.361024   13512 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0328 00:09:24.361024   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:09:24.363440   13512 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube6/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.0-1711559712-18485-amd64.iso...
	I0328 00:09:24.896896   13512 main.go:141] libmachine: Creating SSH key...
	I0328 00:09:24.969037   13512 main.go:141] libmachine: Creating VM...
	I0328 00:09:24.969037   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0328 00:09:28.120241   13512 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0328 00:09:28.121165   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:09:28.121436   13512 main.go:141] libmachine: Using switch "Default Switch"
	I0328 00:09:28.121565   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0328 00:09:30.042061   13512 main.go:141] libmachine: [stdout =====>] : True
	
	I0328 00:09:30.042116   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:09:30.042257   13512 main.go:141] libmachine: Creating VHD
	I0328 00:09:30.042345   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-170000-m03\fixed.vhd' -SizeBytes 10MB -Fixed
	I0328 00:09:34.040978   13512 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube6
	Path                    : C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-170000-m03\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : AF977A12-6A66-403E-BF63-8FC75EA3BF37
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0328 00:09:34.041977   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:09:34.042033   13512 main.go:141] libmachine: Writing magic tar header
	I0328 00:09:34.042063   13512 main.go:141] libmachine: Writing SSH key tar header
	I0328 00:09:34.052783   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-170000-m03\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-170000-m03\disk.vhd' -VHDType Dynamic -DeleteSource
	I0328 00:09:37.354393   13512 main.go:141] libmachine: [stdout =====>] : 
	I0328 00:09:37.354629   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:09:37.354629   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-170000-m03\disk.vhd' -SizeBytes 20000MB
	I0328 00:09:40.075685   13512 main.go:141] libmachine: [stdout =====>] : 
	I0328 00:09:40.075801   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:09:40.075801   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-170000-m03 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-170000-m03' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0328 00:09:44.605015   13512 main.go:141] libmachine: [stdout =====>] : 
	Name          State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----          ----- ----------- ----------------- ------   ------             -------
	ha-170000-m03 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0328 00:09:44.605083   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:09:44.605248   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-170000-m03 -DynamicMemoryEnabled $false
	I0328 00:09:47.023311   13512 main.go:141] libmachine: [stdout =====>] : 
	I0328 00:09:47.023311   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:09:47.023311   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-170000-m03 -Count 2
	I0328 00:09:49.403984   13512 main.go:141] libmachine: [stdout =====>] : 
	I0328 00:09:49.403984   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:09:49.403984   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-170000-m03 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-170000-m03\boot2docker.iso'
	I0328 00:09:52.210166   13512 main.go:141] libmachine: [stdout =====>] : 
	I0328 00:09:52.210518   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:09:52.210634   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-170000-m03 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-170000-m03\disk.vhd'
	I0328 00:09:55.070745   13512 main.go:141] libmachine: [stdout =====>] : 
	I0328 00:09:55.070745   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:09:55.070745   13512 main.go:141] libmachine: Starting VM...
	I0328 00:09:55.070745   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-170000-m03
	I0328 00:09:58.341654   13512 main.go:141] libmachine: [stdout =====>] : 
	I0328 00:09:58.342255   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:09:58.342288   13512 main.go:141] libmachine: Waiting for host to start...
	I0328 00:09:58.342345   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-170000-m03 ).state
	I0328 00:10:00.768092   13512 main.go:141] libmachine: [stdout =====>] : Running
	
	I0328 00:10:00.768963   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:10:00.769067   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-170000-m03 ).networkadapters[0]).ipaddresses[0]
	I0328 00:10:03.439413   13512 main.go:141] libmachine: [stdout =====>] : 
	I0328 00:10:03.440106   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:10:04.447876   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-170000-m03 ).state
	I0328 00:10:06.764931   13512 main.go:141] libmachine: [stdout =====>] : Running
	
	I0328 00:10:06.764931   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:10:06.764931   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-170000-m03 ).networkadapters[0]).ipaddresses[0]
	I0328 00:10:09.459643   13512 main.go:141] libmachine: [stdout =====>] : 
	I0328 00:10:09.459643   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:10:10.470763   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-170000-m03 ).state
	I0328 00:10:12.809573   13512 main.go:141] libmachine: [stdout =====>] : Running
	
	I0328 00:10:12.809634   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:10:12.809716   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-170000-m03 ).networkadapters[0]).ipaddresses[0]
	I0328 00:10:15.521240   13512 main.go:141] libmachine: [stdout =====>] : 
	I0328 00:10:15.521301   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:10:16.527268   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-170000-m03 ).state
	I0328 00:10:18.884599   13512 main.go:141] libmachine: [stdout =====>] : Running
	
	I0328 00:10:18.885246   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:10:18.885394   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-170000-m03 ).networkadapters[0]).ipaddresses[0]
	I0328 00:10:21.583674   13512 main.go:141] libmachine: [stdout =====>] : 
	I0328 00:10:21.583674   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:10:22.589099   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-170000-m03 ).state
	I0328 00:10:24.949863   13512 main.go:141] libmachine: [stdout =====>] : Running
	
	I0328 00:10:24.950291   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:10:24.950291   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-170000-m03 ).networkadapters[0]).ipaddresses[0]
	I0328 00:10:27.701348   13512 main.go:141] libmachine: [stdout =====>] : 172.28.227.17
	
	I0328 00:10:27.701348   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:10:27.701701   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-170000-m03 ).state
	I0328 00:10:30.023831   13512 main.go:141] libmachine: [stdout =====>] : Running
	
	I0328 00:10:30.023831   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:10:30.023831   13512 machine.go:94] provisionDockerMachine start ...
	I0328 00:10:30.024847   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-170000-m03 ).state
	I0328 00:10:32.374175   13512 main.go:141] libmachine: [stdout =====>] : Running
	
	I0328 00:10:32.374175   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:10:32.374175   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-170000-m03 ).networkadapters[0]).ipaddresses[0]
	I0328 00:10:35.136968   13512 main.go:141] libmachine: [stdout =====>] : 172.28.227.17
	
	I0328 00:10:35.136968   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:10:35.142467   13512 main.go:141] libmachine: Using SSH client type: native
	I0328 00:10:35.142529   13512 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x12a9f80] 0x12acb60 <nil>  [] 0s} 172.28.227.17 22 <nil> <nil>}
	I0328 00:10:35.143063   13512 main.go:141] libmachine: About to run SSH command:
	hostname
	I0328 00:10:35.272914   13512 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0328 00:10:35.272914   13512 buildroot.go:166] provisioning hostname "ha-170000-m03"
	I0328 00:10:35.272914   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-170000-m03 ).state
	I0328 00:10:37.592909   13512 main.go:141] libmachine: [stdout =====>] : Running
	
	I0328 00:10:37.592909   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:10:37.592909   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-170000-m03 ).networkadapters[0]).ipaddresses[0]
	I0328 00:10:40.329320   13512 main.go:141] libmachine: [stdout =====>] : 172.28.227.17
	
	I0328 00:10:40.330061   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:10:40.335836   13512 main.go:141] libmachine: Using SSH client type: native
	I0328 00:10:40.336409   13512 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x12a9f80] 0x12acb60 <nil>  [] 0s} 172.28.227.17 22 <nil> <nil>}
	I0328 00:10:40.336409   13512 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-170000-m03 && echo "ha-170000-m03" | sudo tee /etc/hostname
	I0328 00:10:40.494672   13512 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-170000-m03
	
	I0328 00:10:40.494783   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-170000-m03 ).state
	I0328 00:10:42.820519   13512 main.go:141] libmachine: [stdout =====>] : Running
	
	I0328 00:10:42.820688   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:10:42.820760   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-170000-m03 ).networkadapters[0]).ipaddresses[0]
	I0328 00:10:45.648838   13512 main.go:141] libmachine: [stdout =====>] : 172.28.227.17
	
	I0328 00:10:45.649744   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:10:45.655517   13512 main.go:141] libmachine: Using SSH client type: native
	I0328 00:10:45.656049   13512 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x12a9f80] 0x12acb60 <nil>  [] 0s} 172.28.227.17 22 <nil> <nil>}
	I0328 00:10:45.656049   13512 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-170000-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-170000-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-170000-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0328 00:10:45.801301   13512 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0328 00:10:45.801301   13512 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0328 00:10:45.801301   13512 buildroot.go:174] setting up certificates
	I0328 00:10:45.801301   13512 provision.go:84] configureAuth start
	I0328 00:10:45.801301   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-170000-m03 ).state
	I0328 00:10:48.146975   13512 main.go:141] libmachine: [stdout =====>] : Running
	
	I0328 00:10:48.147834   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:10:48.147941   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-170000-m03 ).networkadapters[0]).ipaddresses[0]
	I0328 00:10:50.952178   13512 main.go:141] libmachine: [stdout =====>] : 172.28.227.17
	
	I0328 00:10:50.952719   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:10:50.952719   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-170000-m03 ).state
	I0328 00:10:53.220018   13512 main.go:141] libmachine: [stdout =====>] : Running
	
	I0328 00:10:53.220281   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:10:53.220384   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-170000-m03 ).networkadapters[0]).ipaddresses[0]
	I0328 00:10:55.971389   13512 main.go:141] libmachine: [stdout =====>] : 172.28.227.17
	
	I0328 00:10:55.972214   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:10:55.972214   13512 provision.go:143] copyHostCerts
	I0328 00:10:55.972436   13512 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0328 00:10:55.981827   13512 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0328 00:10:55.981827   13512 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0328 00:10:55.982587   13512 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0328 00:10:55.983941   13512 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0328 00:10:55.992781   13512 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0328 00:10:55.992781   13512 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0328 00:10:55.993320   13512 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0328 00:10:55.994217   13512 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0328 00:10:56.002427   13512 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0328 00:10:56.002427   13512 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0328 00:10:56.003436   13512 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1675 bytes)
	I0328 00:10:56.004418   13512 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-170000-m03 san=[127.0.0.1 172.28.227.17 ha-170000-m03 localhost minikube]
	I0328 00:10:56.128965   13512 provision.go:177] copyRemoteCerts
	I0328 00:10:56.143435   13512 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0328 00:10:56.143435   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-170000-m03 ).state
	I0328 00:10:58.412367   13512 main.go:141] libmachine: [stdout =====>] : Running
	
	I0328 00:10:58.412437   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:10:58.412619   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-170000-m03 ).networkadapters[0]).ipaddresses[0]
	I0328 00:11:01.224447   13512 main.go:141] libmachine: [stdout =====>] : 172.28.227.17
	
	I0328 00:11:01.224447   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:11:01.225355   13512 sshutil.go:53] new ssh client: &{IP:172.28.227.17 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-170000-m03\id_rsa Username:docker}
	I0328 00:11:01.327995   13512 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (5.1844144s)
	I0328 00:11:01.327995   13512 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0328 00:11:01.328346   13512 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0328 00:11:01.382551   13512 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0328 00:11:01.383168   13512 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0328 00:11:01.434334   13512 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0328 00:11:01.434874   13512 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I0328 00:11:01.510845   13512 provision.go:87] duration metric: took 15.7094462s to configureAuth
	I0328 00:11:01.510845   13512 buildroot.go:189] setting minikube options for container-runtime
	I0328 00:11:01.527005   13512 config.go:182] Loaded profile config "ha-170000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0328 00:11:01.527158   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-170000-m03 ).state
	I0328 00:11:03.956380   13512 main.go:141] libmachine: [stdout =====>] : Running
	
	I0328 00:11:03.956449   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:11:03.956606   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-170000-m03 ).networkadapters[0]).ipaddresses[0]
	I0328 00:11:06.729489   13512 main.go:141] libmachine: [stdout =====>] : 172.28.227.17
	
	I0328 00:11:06.729489   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:11:06.736658   13512 main.go:141] libmachine: Using SSH client type: native
	I0328 00:11:06.737324   13512 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x12a9f80] 0x12acb60 <nil>  [] 0s} 172.28.227.17 22 <nil> <nil>}
	I0328 00:11:06.737324   13512 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0328 00:11:06.859725   13512 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0328 00:11:06.859725   13512 buildroot.go:70] root file system type: tmpfs
	I0328 00:11:06.859725   13512 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0328 00:11:06.859725   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-170000-m03 ).state
	I0328 00:11:09.162077   13512 main.go:141] libmachine: [stdout =====>] : Running
	
	I0328 00:11:09.163077   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:11:09.163143   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-170000-m03 ).networkadapters[0]).ipaddresses[0]
	I0328 00:11:11.938069   13512 main.go:141] libmachine: [stdout =====>] : 172.28.227.17
	
	I0328 00:11:11.938069   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:11:11.945302   13512 main.go:141] libmachine: Using SSH client type: native
	I0328 00:11:11.945476   13512 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x12a9f80] 0x12acb60 <nil>  [] 0s} 172.28.227.17 22 <nil> <nil>}
	I0328 00:11:11.945476   13512 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.28.239.31"
	Environment="NO_PROXY=172.28.239.31,172.28.224.3"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0328 00:11:12.113989   13512 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.28.239.31
	Environment=NO_PROXY=172.28.239.31,172.28.224.3
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0328 00:11:12.114099   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-170000-m03 ).state
	I0328 00:11:14.400400   13512 main.go:141] libmachine: [stdout =====>] : Running
	
	I0328 00:11:14.400574   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:11:14.400574   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-170000-m03 ).networkadapters[0]).ipaddresses[0]
	I0328 00:11:17.150210   13512 main.go:141] libmachine: [stdout =====>] : 172.28.227.17
	
	I0328 00:11:17.150391   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:11:17.156150   13512 main.go:141] libmachine: Using SSH client type: native
	I0328 00:11:17.156908   13512 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x12a9f80] 0x12acb60 <nil>  [] 0s} 172.28.227.17 22 <nil> <nil>}
	I0328 00:11:17.156908   13512 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0328 00:11:19.425896   13512 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0328 00:11:19.425896   13512 machine.go:97] duration metric: took 49.4017594s to provisionDockerMachine
	I0328 00:11:19.425896   13512 client.go:171] duration metric: took 2m4.9623144s to LocalClient.Create
	I0328 00:11:19.425896   13512 start.go:167] duration metric: took 2m4.9623144s to libmachine.API.Create "ha-170000"
	I0328 00:11:19.425896   13512 start.go:293] postStartSetup for "ha-170000-m03" (driver="hyperv")
	I0328 00:11:19.425896   13512 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0328 00:11:19.439712   13512 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0328 00:11:19.439712   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-170000-m03 ).state
	I0328 00:11:21.743398   13512 main.go:141] libmachine: [stdout =====>] : Running
	
	I0328 00:11:21.743398   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:11:21.743398   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-170000-m03 ).networkadapters[0]).ipaddresses[0]
	I0328 00:11:24.457643   13512 main.go:141] libmachine: [stdout =====>] : 172.28.227.17
	
	I0328 00:11:24.457643   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:11:24.462510   13512 sshutil.go:53] new ssh client: &{IP:172.28.227.17 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-170000-m03\id_rsa Username:docker}
	I0328 00:11:24.566324   13512 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (5.1265798s)
	I0328 00:11:24.579256   13512 ssh_runner.go:195] Run: cat /etc/os-release
	I0328 00:11:24.587473   13512 info.go:137] Remote host: Buildroot 2023.02.9
	I0328 00:11:24.587473   13512 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0328 00:11:24.588182   13512 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0328 00:11:24.589090   13512 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\104602.pem -> 104602.pem in /etc/ssl/certs
	I0328 00:11:24.589167   13512 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\104602.pem -> /etc/ssl/certs/104602.pem
	I0328 00:11:24.604039   13512 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0328 00:11:24.623668   13512 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\104602.pem --> /etc/ssl/certs/104602.pem (1708 bytes)
	I0328 00:11:24.676946   13512 start.go:296] duration metric: took 5.2510174s for postStartSetup
	I0328 00:11:24.679915   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-170000-m03 ).state
	I0328 00:11:26.967162   13512 main.go:141] libmachine: [stdout =====>] : Running
	
	I0328 00:11:26.967162   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:11:26.967373   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-170000-m03 ).networkadapters[0]).ipaddresses[0]
	I0328 00:11:29.695411   13512 main.go:141] libmachine: [stdout =====>] : 172.28.227.17
	
	I0328 00:11:29.695411   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:11:29.695411   13512 profile.go:142] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-170000\config.json ...
	I0328 00:11:29.698252   13512 start.go:128] duration metric: took 2m15.237623s to createHost
	I0328 00:11:29.698252   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-170000-m03 ).state
	I0328 00:11:31.981641   13512 main.go:141] libmachine: [stdout =====>] : Running
	
	I0328 00:11:31.981930   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:11:31.981930   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-170000-m03 ).networkadapters[0]).ipaddresses[0]
	I0328 00:11:34.699491   13512 main.go:141] libmachine: [stdout =====>] : 172.28.227.17
	
	I0328 00:11:34.700390   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:11:34.706044   13512 main.go:141] libmachine: Using SSH client type: native
	I0328 00:11:34.706799   13512 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x12a9f80] 0x12acb60 <nil>  [] 0s} 172.28.227.17 22 <nil> <nil>}
	I0328 00:11:34.706799   13512 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0328 00:11:34.833256   13512 main.go:141] libmachine: SSH cmd err, output: <nil>: 1711584694.842257460
	
	I0328 00:11:34.833369   13512 fix.go:216] guest clock: 1711584694.842257460
	I0328 00:11:34.833369   13512 fix.go:229] Guest: 2024-03-28 00:11:34.84225746 +0000 UTC Remote: 2024-03-28 00:11:29.6982526 +0000 UTC m=+605.454643701 (delta=5.14400486s)
	I0328 00:11:34.833511   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-170000-m03 ).state
	I0328 00:11:37.106711   13512 main.go:141] libmachine: [stdout =====>] : Running
	
	I0328 00:11:37.106711   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:11:37.107728   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-170000-m03 ).networkadapters[0]).ipaddresses[0]
	I0328 00:11:39.861297   13512 main.go:141] libmachine: [stdout =====>] : 172.28.227.17
	
	I0328 00:11:39.861297   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:11:39.867067   13512 main.go:141] libmachine: Using SSH client type: native
	I0328 00:11:39.867221   13512 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x12a9f80] 0x12acb60 <nil>  [] 0s} 172.28.227.17 22 <nil> <nil>}
	I0328 00:11:39.867221   13512 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1711584694
	I0328 00:11:40.017756   13512 main.go:141] libmachine: SSH cmd err, output: <nil>: Thu Mar 28 00:11:34 UTC 2024
	
	I0328 00:11:40.017756   13512 fix.go:236] clock set: Thu Mar 28 00:11:34 UTC 2024
	 (err=<nil>)
	I0328 00:11:40.017756   13512 start.go:83] releasing machines lock for "ha-170000-m03", held for 2m25.5580492s
	I0328 00:11:40.017982   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-170000-m03 ).state
	I0328 00:11:42.307215   13512 main.go:141] libmachine: [stdout =====>] : Running
	
	I0328 00:11:42.307857   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:11:42.307857   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-170000-m03 ).networkadapters[0]).ipaddresses[0]
	I0328 00:11:45.088912   13512 main.go:141] libmachine: [stdout =====>] : 172.28.227.17
	
	I0328 00:11:45.088912   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:11:45.095105   13512 out.go:177] * Found network options:
	I0328 00:11:45.097506   13512 out.go:177]   - NO_PROXY=172.28.239.31,172.28.224.3
	W0328 00:11:45.099273   13512 proxy.go:119] fail to check proxy env: Error ip not in block
	W0328 00:11:45.099273   13512 proxy.go:119] fail to check proxy env: Error ip not in block
	I0328 00:11:45.101175   13512 out.go:177]   - NO_PROXY=172.28.239.31,172.28.224.3
	W0328 00:11:45.104173   13512 proxy.go:119] fail to check proxy env: Error ip not in block
	W0328 00:11:45.104173   13512 proxy.go:119] fail to check proxy env: Error ip not in block
	W0328 00:11:45.104471   13512 proxy.go:119] fail to check proxy env: Error ip not in block
	W0328 00:11:45.104471   13512 proxy.go:119] fail to check proxy env: Error ip not in block
	I0328 00:11:45.107490   13512 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0328 00:11:45.107490   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-170000-m03 ).state
	I0328 00:11:45.117580   13512 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0328 00:11:45.117580   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-170000-m03 ).state
	I0328 00:11:47.455427   13512 main.go:141] libmachine: [stdout =====>] : Running
	
	I0328 00:11:47.455427   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:11:47.455427   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-170000-m03 ).networkadapters[0]).ipaddresses[0]
	I0328 00:11:47.491755   13512 main.go:141] libmachine: [stdout =====>] : Running
	
	I0328 00:11:47.491834   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:11:47.491892   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-170000-m03 ).networkadapters[0]).ipaddresses[0]
	I0328 00:11:50.362432   13512 main.go:141] libmachine: [stdout =====>] : 172.28.227.17
	
	I0328 00:11:50.362507   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:11:50.362507   13512 sshutil.go:53] new ssh client: &{IP:172.28.227.17 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-170000-m03\id_rsa Username:docker}
	I0328 00:11:50.391889   13512 main.go:141] libmachine: [stdout =====>] : 172.28.227.17
	
	I0328 00:11:50.391970   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:11:50.392565   13512 sshutil.go:53] new ssh client: &{IP:172.28.227.17 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-170000-m03\id_rsa Username:docker}
	I0328 00:11:50.559763   13512 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (5.4420443s)
	W0328 00:11:50.559763   13512 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0328 00:11:50.559763   13512 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.4521343s)
	I0328 00:11:50.573439   13512 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0328 00:11:50.606513   13512 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0328 00:11:50.606513   13512 start.go:494] detecting cgroup driver to use...
	I0328 00:11:50.606513   13512 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0328 00:11:50.658201   13512 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0328 00:11:50.694246   13512 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0328 00:11:50.717044   13512 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0328 00:11:50.729576   13512 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0328 00:11:50.763493   13512 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0328 00:11:50.797091   13512 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0328 00:11:50.832209   13512 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0328 00:11:50.868567   13512 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0328 00:11:50.905268   13512 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0328 00:11:50.940944   13512 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0328 00:11:50.976374   13512 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0328 00:11:51.010344   13512 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0328 00:11:51.046659   13512 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0328 00:11:51.081592   13512 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0328 00:11:51.292532   13512 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0328 00:11:51.327380   13512 start.go:494] detecting cgroup driver to use...
	I0328 00:11:51.342874   13512 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0328 00:11:51.384668   13512 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0328 00:11:51.423634   13512 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0328 00:11:51.478177   13512 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0328 00:11:51.517605   13512 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0328 00:11:51.560271   13512 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0328 00:11:51.627862   13512 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0328 00:11:51.656380   13512 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0328 00:11:51.709153   13512 ssh_runner.go:195] Run: which cri-dockerd
	I0328 00:11:51.728844   13512 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0328 00:11:51.747791   13512 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0328 00:11:51.795547   13512 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0328 00:11:52.020194   13512 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0328 00:11:52.244134   13512 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0328 00:11:52.244253   13512 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0328 00:11:52.293618   13512 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0328 00:11:52.521802   13512 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0328 00:11:55.154311   13512 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.6324928s)
	I0328 00:11:55.170396   13512 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0328 00:11:55.212045   13512 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0328 00:11:55.249933   13512 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0328 00:11:55.467978   13512 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0328 00:11:55.692741   13512 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0328 00:11:55.917272   13512 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0328 00:11:55.970058   13512 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0328 00:11:56.011352   13512 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0328 00:11:56.242569   13512 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0328 00:11:56.356668   13512 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0328 00:11:56.372747   13512 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0328 00:11:56.382175   13512 start.go:562] Will wait 60s for crictl version
	I0328 00:11:56.396011   13512 ssh_runner.go:195] Run: which crictl
	I0328 00:11:56.415458   13512 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0328 00:11:56.499931   13512 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.0
	RuntimeApiVersion:  v1
	I0328 00:11:56.509980   13512 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0328 00:11:56.556128   13512 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0328 00:11:56.592284   13512 out.go:204] * Preparing Kubernetes v1.29.3 on Docker 26.0.0 ...
	I0328 00:11:56.595794   13512 out.go:177]   - env NO_PROXY=172.28.239.31
	I0328 00:11:56.598814   13512 out.go:177]   - env NO_PROXY=172.28.239.31,172.28.224.3
	I0328 00:11:56.600724   13512 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0328 00:11:56.604724   13512 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0328 00:11:56.604724   13512 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0328 00:11:56.604724   13512 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0328 00:11:56.604724   13512 ip.go:207] Found interface: {Index:6 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:26:7a:39 Flags:up|broadcast|multicast|running}
	I0328 00:11:56.607642   13512 ip.go:210] interface addr: fe80::e3e0:8483:9c84:940f/64
	I0328 00:11:56.607642   13512 ip.go:210] interface addr: 172.28.224.1/20
	I0328 00:11:56.619667   13512 ssh_runner.go:195] Run: grep 172.28.224.1	host.minikube.internal$ /etc/hosts
	I0328 00:11:56.626331   13512 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.28.224.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0328 00:11:56.650427   13512 mustload.go:65] Loading cluster: ha-170000
	I0328 00:11:56.650736   13512 config.go:182] Loaded profile config "ha-170000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0328 00:11:56.661799   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-170000 ).state
	I0328 00:11:58.911445   13512 main.go:141] libmachine: [stdout =====>] : Running
	
	I0328 00:11:58.911445   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:11:58.911445   13512 host.go:66] Checking if "ha-170000" exists ...
	I0328 00:11:58.912164   13512 certs.go:68] Setting up C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-170000 for IP: 172.28.227.17
	I0328 00:11:58.912164   13512 certs.go:194] generating shared ca certs ...
	I0328 00:11:58.912164   13512 certs.go:226] acquiring lock for ca certs: {Name:mkc71405905d3cea24da832e98113e061e759324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 00:11:58.930691   13512 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key
	I0328 00:11:58.943675   13512 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key
	I0328 00:11:58.943675   13512 certs.go:256] generating profile certs ...
	I0328 00:11:58.944679   13512 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-170000\client.key
	I0328 00:11:58.944679   13512 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-170000\apiserver.key.18645f47
	I0328 00:11:58.944679   13512 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-170000\apiserver.crt.18645f47 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.28.239.31 172.28.224.3 172.28.227.17 172.28.239.254]
	I0328 00:11:59.094505   13512 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-170000\apiserver.crt.18645f47 ...
	I0328 00:11:59.094505   13512 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-170000\apiserver.crt.18645f47: {Name:mk775257f382591a7ec7000c86c060a0540ed0e9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 00:11:59.095850   13512 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-170000\apiserver.key.18645f47 ...
	I0328 00:11:59.095850   13512 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-170000\apiserver.key.18645f47: {Name:mk86d5c3ddc5fb09aa811e85a0cb8b7d8a26f6d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 00:11:59.096193   13512 certs.go:381] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-170000\apiserver.crt.18645f47 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-170000\apiserver.crt
	I0328 00:11:59.108169   13512 certs.go:385] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-170000\apiserver.key.18645f47 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-170000\apiserver.key
	I0328 00:11:59.123919   13512 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-170000\proxy-client.key
	I0328 00:11:59.123919   13512 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0328 00:11:59.124189   13512 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0328 00:11:59.124472   13512 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0328 00:11:59.124817   13512 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0328 00:11:59.125077   13512 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-170000\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0328 00:11:59.125332   13512 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-170000\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0328 00:11:59.125467   13512 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-170000\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0328 00:11:59.125635   13512 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-170000\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0328 00:11:59.126270   13512 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\10460.pem (1338 bytes)
	W0328 00:11:59.128252   13512 certs.go:480] ignoring C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\10460_empty.pem, impossibly tiny 0 bytes
	I0328 00:11:59.128463   13512 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0328 00:11:59.128818   13512 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0328 00:11:59.129186   13512 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0328 00:11:59.129524   13512 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0328 00:11:59.130273   13512 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\104602.pem (1708 bytes)
	I0328 00:11:59.130533   13512 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\104602.pem -> /usr/share/ca-certificates/104602.pem
	I0328 00:11:59.130533   13512 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0328 00:11:59.130533   13512 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\10460.pem -> /usr/share/ca-certificates/10460.pem
	I0328 00:11:59.130533   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-170000 ).state
	I0328 00:12:01.430637   13512 main.go:141] libmachine: [stdout =====>] : Running
	
	I0328 00:12:01.430637   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:12:01.430904   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-170000 ).networkadapters[0]).ipaddresses[0]
	I0328 00:12:04.202047   13512 main.go:141] libmachine: [stdout =====>] : 172.28.239.31
	
	I0328 00:12:04.202047   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:12:04.202931   13512 sshutil.go:53] new ssh client: &{IP:172.28.239.31 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-170000\id_rsa Username:docker}
	I0328 00:12:04.309542   13512 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0328 00:12:04.318209   13512 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0328 00:12:04.359642   13512 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0328 00:12:04.367426   13512 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0328 00:12:04.405277   13512 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0328 00:12:04.412071   13512 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0328 00:12:04.445163   13512 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0328 00:12:04.452995   13512 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0328 00:12:04.494261   13512 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0328 00:12:04.503478   13512 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0328 00:12:04.541691   13512 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0328 00:12:04.548643   13512 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0328 00:12:04.572314   13512 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0328 00:12:04.625706   13512 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0328 00:12:04.677441   13512 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0328 00:12:04.729212   13512 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0328 00:12:04.777433   13512 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-170000\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0328 00:12:04.829262   13512 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-170000\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0328 00:12:04.880597   13512 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-170000\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0328 00:12:04.932278   13512 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-170000\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0328 00:12:04.985135   13512 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\104602.pem --> /usr/share/ca-certificates/104602.pem (1708 bytes)
	I0328 00:12:05.035832   13512 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0328 00:12:05.084191   13512 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\10460.pem --> /usr/share/ca-certificates/10460.pem (1338 bytes)
	I0328 00:12:05.136133   13512 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0328 00:12:05.170889   13512 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0328 00:12:05.204980   13512 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0328 00:12:05.237575   13512 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0328 00:12:05.272293   13512 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0328 00:12:05.307298   13512 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0328 00:12:05.341734   13512 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (758 bytes)
	I0328 00:12:05.389371   13512 ssh_runner.go:195] Run: openssl version
	I0328 00:12:05.412157   13512 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/104602.pem && ln -fs /usr/share/ca-certificates/104602.pem /etc/ssl/certs/104602.pem"
	I0328 00:12:05.445342   13512 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/104602.pem
	I0328 00:12:05.453654   13512 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 27 23:40 /usr/share/ca-certificates/104602.pem
	I0328 00:12:05.467295   13512 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/104602.pem
	I0328 00:12:05.495145   13512 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/104602.pem /etc/ssl/certs/3ec20f2e.0"
	I0328 00:12:05.531582   13512 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0328 00:12:05.567682   13512 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0328 00:12:05.578846   13512 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 27 23:37 /usr/share/ca-certificates/minikubeCA.pem
	I0328 00:12:05.595060   13512 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0328 00:12:05.622555   13512 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0328 00:12:05.660818   13512 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10460.pem && ln -fs /usr/share/ca-certificates/10460.pem /etc/ssl/certs/10460.pem"
	I0328 00:12:05.698921   13512 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10460.pem
	I0328 00:12:05.706748   13512 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 27 23:40 /usr/share/ca-certificates/10460.pem
	I0328 00:12:05.725025   13512 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10460.pem
	I0328 00:12:05.749169   13512 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10460.pem /etc/ssl/certs/51391683.0"
	I0328 00:12:05.788934   13512 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0328 00:12:05.796821   13512 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0328 00:12:05.796821   13512 kubeadm.go:928] updating node {m03 172.28.227.17 8443 v1.29.3 docker true true} ...
	I0328 00:12:05.797356   13512 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-170000-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.28.227.17
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:ha-170000 Namespace:default APIServerHAVIP:172.28.239.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0328 00:12:05.797356   13512 kube-vip.go:111] generating kube-vip config ...
	I0328 00:12:05.809099   13512 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0328 00:12:05.837891   13512 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0328 00:12:05.838151   13512 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.28.239.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0328 00:12:05.854097   13512 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0328 00:12:05.875439   13512 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.29.3: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.29.3': No such file or directory
	
	Initiating transfer...
	I0328 00:12:05.887961   13512 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.29.3
	I0328 00:12:05.912512   13512 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubeadm.sha256
	I0328 00:12:05.912512   13512 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubectl.sha256
	I0328 00:12:05.912512   13512 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubelet.sha256
	I0328 00:12:05.912785   13512 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.29.3/kubectl -> /var/lib/minikube/binaries/v1.29.3/kubectl
	I0328 00:12:05.912785   13512 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.29.3/kubeadm -> /var/lib/minikube/binaries/v1.29.3/kubeadm
	I0328 00:12:05.926881   13512 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0328 00:12:05.942065   13512 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.29.3/kubectl
	I0328 00:12:05.943093   13512 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.29.3/kubeadm
	I0328 00:12:05.957621   13512 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.29.3/kubelet -> /var/lib/minikube/binaries/v1.29.3/kubelet
	I0328 00:12:05.957621   13512 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.29.3/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.29.3/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.29.3/kubectl': No such file or directory
	I0328 00:12:05.957698   13512 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.29.3/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.29.3/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.29.3/kubeadm': No such file or directory
	I0328 00:12:05.957698   13512 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.29.3/kubectl --> /var/lib/minikube/binaries/v1.29.3/kubectl (49799168 bytes)
	I0328 00:12:05.957698   13512 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.29.3/kubeadm --> /var/lib/minikube/binaries/v1.29.3/kubeadm (48340992 bytes)
	I0328 00:12:05.988206   13512 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.29.3/kubelet
	I0328 00:12:06.060728   13512 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.29.3/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.29.3/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.29.3/kubelet': No such file or directory
	I0328 00:12:06.060728   13512 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.29.3/kubelet --> /var/lib/minikube/binaries/v1.29.3/kubelet (111919104 bytes)
	I0328 00:12:07.469413   13512 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0328 00:12:07.490772   13512 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0328 00:12:07.528169   13512 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0328 00:12:07.571377   13512 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1346 bytes)
	I0328 00:12:07.627176   13512 ssh_runner.go:195] Run: grep 172.28.239.254	control-plane.minikube.internal$ /etc/hosts
	I0328 00:12:07.635302   13512 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.28.239.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0328 00:12:07.676140   13512 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0328 00:12:07.914597   13512 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0328 00:12:07.950295   13512 host.go:66] Checking if "ha-170000" exists ...
	I0328 00:12:07.972451   13512 start.go:316] joinCluster: &{Name:ha-170000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:ha-170000 Namespace:default APIServerHAVIP:172.28.239.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.28.239.31 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.28.224.3 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:172.28.227.17 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0328 00:12:07.972979   13512 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0328 00:12:07.973377   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-170000 ).state
	I0328 00:12:10.230652   13512 main.go:141] libmachine: [stdout =====>] : Running
	
	I0328 00:12:10.231380   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:12:10.231488   13512 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-170000 ).networkadapters[0]).ipaddresses[0]
	I0328 00:12:13.002440   13512 main.go:141] libmachine: [stdout =====>] : 172.28.239.31
	
	I0328 00:12:13.002546   13512 main.go:141] libmachine: [stderr =====>] : 
	I0328 00:12:13.003041   13512 sshutil.go:53] new ssh client: &{IP:172.28.239.31 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-170000\id_rsa Username:docker}
	I0328 00:12:13.234374   13512 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm token create --print-join-command --ttl=0": (5.2613624s)
	I0328 00:12:13.234638   13512 start.go:342] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:172.28.227.17 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0328 00:12:13.234702   13512 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token cnq6yu.t9p6crqq0gi1ikxs --discovery-token-ca-cert-hash sha256:0ddd52f56960165cc9115f65779af061c85c9b2eafbef06ee2128eb63ce54d7a --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-170000-m03 --control-plane --apiserver-advertise-address=172.28.227.17 --apiserver-bind-port=8443"
	I0328 00:13:06.667336   13512 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token cnq6yu.t9p6crqq0gi1ikxs --discovery-token-ca-cert-hash sha256:0ddd52f56960165cc9115f65779af061c85c9b2eafbef06ee2128eb63ce54d7a --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-170000-m03 --control-plane --apiserver-advertise-address=172.28.227.17 --apiserver-bind-port=8443": (53.4323027s)
	I0328 00:13:06.667336   13512 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0328 00:13:07.475438   13512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-170000-m03 minikube.k8s.io/updated_at=2024_03_28T00_13_07_0700 minikube.k8s.io/version=v1.33.0-beta.0 minikube.k8s.io/commit=1a940f980f77c9bfc8ef93678db8c1f49c9dd79d minikube.k8s.io/name=ha-170000 minikube.k8s.io/primary=false
	I0328 00:13:07.701935   13512 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-170000-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0328 00:13:07.887530   13512 start.go:318] duration metric: took 59.914708s to joinCluster
	I0328 00:13:07.887530   13512 start.go:234] Will wait 6m0s for node &{Name:m03 IP:172.28.227.17 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0328 00:13:07.892192   13512 out.go:177] * Verifying Kubernetes components...
	I0328 00:13:07.888661   13512 config.go:182] Loaded profile config "ha-170000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0328 00:13:07.906635   13512 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0328 00:13:08.302425   13512 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0328 00:13:08.355675   13512 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0328 00:13:08.356497   13512 kapi.go:59] client config for ha-170000: &rest.Config{Host:"https://172.28.239.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-170000\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-170000\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), Ne
xtProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x26ab500), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0328 00:13:08.356497   13512 kubeadm.go:477] Overriding stale ClientConfig host https://172.28.239.254:8443 with https://172.28.239.31:8443
	I0328 00:13:08.357766   13512 node_ready.go:35] waiting up to 6m0s for node "ha-170000-m03" to be "Ready" ...
	I0328 00:13:08.358047   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:13:08.358070   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:08.358070   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:08.358120   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:08.372644   13512 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0328 00:13:08.862152   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:13:08.862152   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:08.862152   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:08.862152   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:08.866785   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:13:09.364540   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:13:09.364540   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:09.364540   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:09.364540   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:09.370302   13512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 00:13:09.865902   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:13:09.866124   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:09.866124   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:09.866124   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:09.871658   13512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 00:13:10.370197   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:13:10.370197   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:10.370197   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:10.370197   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:10.374804   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:13:10.376218   13512 node_ready.go:53] node "ha-170000-m03" has status "Ready":"False"
	I0328 00:13:10.860251   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:13:10.860334   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:10.860334   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:10.860334   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:10.867321   13512 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0328 00:13:11.359448   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:13:11.359448   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:11.359448   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:11.359448   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:11.365925   13512 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0328 00:13:11.865242   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:13:11.865296   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:11.865296   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:11.865296   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:11.870902   13512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 00:13:11.872628   13512 node_ready.go:49] node "ha-170000-m03" has status "Ready":"True"
	I0328 00:13:11.872720   13512 node_ready.go:38] duration metric: took 3.5148105s for node "ha-170000-m03" to be "Ready" ...
	I0328 00:13:11.872720   13512 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0328 00:13:11.872900   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods
	I0328 00:13:11.872961   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:11.872961   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:11.873013   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:11.901740   13512 round_trippers.go:574] Response Status: 200 OK in 28 milliseconds
	I0328 00:13:11.912352   13512 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-5npq4" in "kube-system" namespace to be "Ready" ...
	I0328 00:13:11.912352   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-5npq4
	I0328 00:13:11.912352   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:11.912352   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:11.912352   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:11.918426   13512 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0328 00:13:11.919954   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000
	I0328 00:13:11.919954   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:11.920015   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:11.920015   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:11.924524   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:13:11.925756   13512 pod_ready.go:92] pod "coredns-76f75df574-5npq4" in "kube-system" namespace has status "Ready":"True"
	I0328 00:13:11.925828   13512 pod_ready.go:81] duration metric: took 13.4762ms for pod "coredns-76f75df574-5npq4" in "kube-system" namespace to be "Ready" ...
	I0328 00:13:11.925828   13512 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-mgrhj" in "kube-system" namespace to be "Ready" ...
	I0328 00:13:11.925934   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-mgrhj
	I0328 00:13:11.926013   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:11.926013   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:11.926062   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:11.929665   13512 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0328 00:13:11.931112   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000
	I0328 00:13:11.931112   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:11.931112   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:11.931112   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:11.936358   13512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 00:13:11.937895   13512 pod_ready.go:92] pod "coredns-76f75df574-mgrhj" in "kube-system" namespace has status "Ready":"True"
	I0328 00:13:11.937895   13512 pod_ready.go:81] duration metric: took 12.0675ms for pod "coredns-76f75df574-mgrhj" in "kube-system" namespace to be "Ready" ...
	I0328 00:13:11.937895   13512 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-170000" in "kube-system" namespace to be "Ready" ...
	I0328 00:13:11.937895   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/etcd-ha-170000
	I0328 00:13:11.937895   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:11.937895   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:11.937895   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:11.943899   13512 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0328 00:13:11.945289   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000
	I0328 00:13:11.945289   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:11.945289   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:11.945383   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:11.950241   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:13:11.951234   13512 pod_ready.go:92] pod "etcd-ha-170000" in "kube-system" namespace has status "Ready":"True"
	I0328 00:13:11.951234   13512 pod_ready.go:81] duration metric: took 13.3385ms for pod "etcd-ha-170000" in "kube-system" namespace to be "Ready" ...
	I0328 00:13:11.951234   13512 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-170000-m02" in "kube-system" namespace to be "Ready" ...
	I0328 00:13:11.951234   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/etcd-ha-170000-m02
	I0328 00:13:11.951234   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:11.951234   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:11.951234   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:11.958263   13512 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0328 00:13:11.959402   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m02
	I0328 00:13:11.959402   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:11.959402   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:11.959402   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:11.965047   13512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 00:13:11.965928   13512 pod_ready.go:92] pod "etcd-ha-170000-m02" in "kube-system" namespace has status "Ready":"True"
	I0328 00:13:11.965928   13512 pod_ready.go:81] duration metric: took 14.6943ms for pod "etcd-ha-170000-m02" in "kube-system" namespace to be "Ready" ...
	I0328 00:13:11.966034   13512 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-170000-m03" in "kube-system" namespace to be "Ready" ...
	I0328 00:13:12.068614   13512 request.go:629] Waited for 102.2008ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/etcd-ha-170000-m03
	I0328 00:13:12.068614   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/etcd-ha-170000-m03
	I0328 00:13:12.068614   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:12.068614   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:12.068614   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:12.073330   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:13:12.273672   13512 request.go:629] Waited for 197.9513ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:13:12.273854   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:13:12.274057   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:12.274057   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:12.274057   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:12.281351   13512 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0328 00:13:12.283049   13512 pod_ready.go:92] pod "etcd-ha-170000-m03" in "kube-system" namespace has status "Ready":"True"
	I0328 00:13:12.283099   13512 pod_ready.go:81] duration metric: took 317.0393ms for pod "etcd-ha-170000-m03" in "kube-system" namespace to be "Ready" ...
	I0328 00:13:12.283099   13512 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-170000" in "kube-system" namespace to be "Ready" ...
	I0328 00:13:12.476566   13512 request.go:629] Waited for 193.4662ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000
	I0328 00:13:12.477043   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000
	I0328 00:13:12.477043   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:12.477101   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:12.477101   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:12.481450   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:13:12.667022   13512 request.go:629] Waited for 183.8337ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.239.31:8443/api/v1/nodes/ha-170000
	I0328 00:13:12.667022   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000
	I0328 00:13:12.667022   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:12.667022   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:12.667022   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:12.673026   13512 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0328 00:13:12.674023   13512 pod_ready.go:92] pod "kube-apiserver-ha-170000" in "kube-system" namespace has status "Ready":"True"
	I0328 00:13:12.674023   13512 pod_ready.go:81] duration metric: took 390.9219ms for pod "kube-apiserver-ha-170000" in "kube-system" namespace to be "Ready" ...
	I0328 00:13:12.674023   13512 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-170000-m02" in "kube-system" namespace to be "Ready" ...
	I0328 00:13:12.872220   13512 request.go:629] Waited for 198.1526ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m02
	I0328 00:13:12.872527   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m02
	I0328 00:13:12.872527   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:12.872598   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:12.872598   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:12.881609   13512 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0328 00:13:13.077518   13512 request.go:629] Waited for 194.4658ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.239.31:8443/api/v1/nodes/ha-170000-m02
	I0328 00:13:13.077518   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m02
	I0328 00:13:13.077518   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:13.077518   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:13.077518   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:13.082598   13512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 00:13:13.084080   13512 pod_ready.go:92] pod "kube-apiserver-ha-170000-m02" in "kube-system" namespace has status "Ready":"True"
	I0328 00:13:13.084141   13512 pod_ready.go:81] duration metric: took 410.1159ms for pod "kube-apiserver-ha-170000-m02" in "kube-system" namespace to be "Ready" ...
	I0328 00:13:13.084185   13512 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-170000-m03" in "kube-system" namespace to be "Ready" ...
	I0328 00:13:13.267360   13512 request.go:629] Waited for 182.6691ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m03
	I0328 00:13:13.267497   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m03
	I0328 00:13:13.267497   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:13.267497   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:13.267497   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:13.277120   13512 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0328 00:13:13.473447   13512 request.go:629] Waited for 195.1512ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:13:13.473569   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:13:13.473569   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:13.473569   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:13.473640   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:13.483204   13512 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0328 00:13:13.678963   13512 request.go:629] Waited for 92.9148ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m03
	I0328 00:13:13.678963   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m03
	I0328 00:13:13.678963   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:13.678963   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:13.679242   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:13.684756   13512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 00:13:13.866675   13512 request.go:629] Waited for 180.0707ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:13:13.866840   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:13:13.866869   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:13.866869   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:13.866869   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:13.871358   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:13:14.085938   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m03
	I0328 00:13:14.085938   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:14.085938   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:14.085938   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:14.093499   13512 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0328 00:13:14.271953   13512 request.go:629] Waited for 177.553ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:13:14.272166   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:13:14.272292   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:14.272292   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:14.272292   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:14.277077   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:13:14.598425   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m03
	I0328 00:13:14.598425   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:14.598425   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:14.598425   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:14.604991   13512 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0328 00:13:14.678420   13512 request.go:629] Waited for 71.0218ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:13:14.678520   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:13:14.678520   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:14.678590   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:14.678590   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:14.684096   13512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 00:13:15.099433   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m03
	I0328 00:13:15.099433   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:15.099554   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:15.099554   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:15.105494   13512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 00:13:15.107402   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:13:15.107402   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:15.107402   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:15.107487   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:15.111535   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:13:15.112678   13512 pod_ready.go:102] pod "kube-apiserver-ha-170000-m03" in "kube-system" namespace has status "Ready":"False"
	I0328 00:13:15.595529   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m03
	I0328 00:13:15.595606   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:15.595663   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:15.595663   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:15.599907   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:13:15.602060   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:13:15.602115   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:15.602115   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:15.602141   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:15.608680   13512 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0328 00:13:16.096291   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m03
	I0328 00:13:16.096466   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:16.096466   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:16.096466   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:16.110051   13512 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0328 00:13:16.111607   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:13:16.111607   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:16.111689   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:16.111689   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:16.116493   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:13:16.585627   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m03
	I0328 00:13:16.585753   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:16.585753   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:16.585753   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:16.592878   13512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 00:13:16.594390   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:13:16.594447   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:16.594447   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:16.594447   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:16.598583   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:13:17.090469   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m03
	I0328 00:13:17.090558   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:17.090622   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:17.090622   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:17.096234   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:13:17.097482   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:13:17.097536   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:17.097536   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:17.097591   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:17.101869   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:13:17.589866   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m03
	I0328 00:13:17.589866   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:17.589866   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:17.589866   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:17.596067   13512 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0328 00:13:17.598164   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:13:17.598164   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:17.598164   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:17.598164   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:17.605469   13512 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0328 00:13:17.606285   13512 pod_ready.go:102] pod "kube-apiserver-ha-170000-m03" in "kube-system" namespace has status "Ready":"False"
	I0328 00:13:18.093075   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m03
	I0328 00:13:18.093075   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:18.093075   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:18.093075   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:18.102111   13512 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0328 00:13:18.103351   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:13:18.103407   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:18.103407   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:18.103478   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:18.108775   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:13:18.590854   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m03
	I0328 00:13:18.590854   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:18.590854   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:18.590854   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:18.596713   13512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 00:13:18.598068   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:13:18.598068   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:18.598128   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:18.598128   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:18.602366   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:13:19.090274   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m03
	I0328 00:13:19.090482   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:19.090482   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:19.090482   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:19.096914   13512 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0328 00:13:19.098571   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:13:19.098644   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:19.098644   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:19.098644   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:19.102970   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:13:19.592433   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m03
	I0328 00:13:19.592627   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:19.592627   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:19.592627   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:19.598705   13512 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0328 00:13:19.599994   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:13:19.599994   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:19.599994   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:19.599994   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:19.604626   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:13:20.095379   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m03
	I0328 00:13:20.095379   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:20.095379   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:20.095379   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:20.103977   13512 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0328 00:13:20.105321   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:13:20.105321   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:20.105321   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:20.105321   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:20.109387   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:13:20.110381   13512 pod_ready.go:102] pod "kube-apiserver-ha-170000-m03" in "kube-system" namespace has status "Ready":"False"
	I0328 00:13:20.599457   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m03
	I0328 00:13:20.599457   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:20.599457   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:20.599457   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:20.606916   13512 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0328 00:13:20.607783   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:13:20.607783   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:20.607783   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:20.607783   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:20.616105   13512 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0328 00:13:21.086670   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m03
	I0328 00:13:21.086743   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:21.086743   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:21.086743   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:21.092076   13512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 00:13:21.093883   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:13:21.093883   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:21.093883   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:21.093883   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:21.098668   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:13:21.588722   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m03
	I0328 00:13:21.589164   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:21.589164   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:21.589164   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:21.594272   13512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 00:13:21.596383   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:13:21.596383   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:21.596383   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:21.596383   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:21.602780   13512 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0328 00:13:22.093160   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m03
	I0328 00:13:22.093265   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:22.093265   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:22.093265   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:22.099065   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:13:22.100084   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:13:22.100203   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:22.100203   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:22.100203   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:22.104891   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:13:22.593403   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m03
	I0328 00:13:22.593403   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:22.593403   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:22.593403   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:22.599254   13512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 00:13:22.600050   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:13:22.600620   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:22.600620   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:22.600620   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:22.605973   13512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 00:13:22.606730   13512 pod_ready.go:102] pod "kube-apiserver-ha-170000-m03" in "kube-system" namespace has status "Ready":"False"
	I0328 00:13:23.093609   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m03
	I0328 00:13:23.093609   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:23.093712   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:23.093712   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:23.099076   13512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 00:13:23.100708   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:13:23.100767   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:23.100767   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:23.100767   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:23.104303   13512 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0328 00:13:23.594477   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m03
	I0328 00:13:23.594477   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:23.594477   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:23.594477   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:23.600550   13512 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0328 00:13:23.602660   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:13:23.602660   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:23.602803   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:23.602803   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:23.607853   13512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 00:13:24.094653   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m03
	I0328 00:13:24.094653   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:24.094653   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:24.094653   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:24.103835   13512 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0328 00:13:24.105603   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:13:24.105603   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:24.105603   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:24.105670   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:24.109936   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:13:24.585549   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m03
	I0328 00:13:24.585549   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:24.585549   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:24.585549   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:24.592085   13512 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0328 00:13:24.592085   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:13:24.592085   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:24.592085   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:24.592085   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:24.598735   13512 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0328 00:13:25.089211   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m03
	I0328 00:13:25.089211   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:25.089300   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:25.089300   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:25.095980   13512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 00:13:25.096861   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:13:25.096861   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:25.096861   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:25.096861   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:25.101682   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:13:25.102167   13512 pod_ready.go:102] pod "kube-apiserver-ha-170000-m03" in "kube-system" namespace has status "Ready":"False"
	I0328 00:13:25.591982   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m03
	I0328 00:13:25.592207   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:25.592207   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:25.592207   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:25.601020   13512 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0328 00:13:25.602327   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:13:25.602365   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:25.602365   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:25.602430   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:25.606993   13512 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0328 00:13:26.092924   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m03
	I0328 00:13:26.092924   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:26.093204   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:26.093204   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:26.099632   13512 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0328 00:13:26.101100   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:13:26.101100   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:26.101100   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:26.101100   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:26.105607   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:13:26.593455   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m03
	I0328 00:13:26.593517   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:26.593517   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:26.593517   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:26.599233   13512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 00:13:26.601240   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:13:26.601240   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:26.601240   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:26.601240   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:26.605766   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:13:27.093999   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m03
	I0328 00:13:27.094253   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:27.094253   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:27.094253   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:27.102209   13512 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0328 00:13:27.104044   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:13:27.104044   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:27.104118   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:27.104118   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:27.108873   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:13:27.109647   13512 pod_ready.go:102] pod "kube-apiserver-ha-170000-m03" in "kube-system" namespace has status "Ready":"False"
	I0328 00:13:27.594342   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m03
	I0328 00:13:27.594342   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:27.594342   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:27.594342   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:27.599205   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:13:27.600271   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:13:27.600355   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:27.600355   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:27.600355   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:27.608801   13512 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0328 00:13:28.096343   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m03
	I0328 00:13:28.096343   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:28.096343   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:28.096343   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:28.100969   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:13:28.102671   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:13:28.102671   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:28.102671   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:28.102671   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:28.107267   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:13:28.597167   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m03
	I0328 00:13:28.597167   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:28.597167   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:28.597410   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:28.601668   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:13:28.603437   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:13:28.603437   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:28.603437   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:28.603437   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:28.610973   13512 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0328 00:13:29.085841   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m03
	I0328 00:13:29.085921   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:29.085921   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:29.085921   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:29.091554   13512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 00:13:29.093151   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:13:29.093151   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:29.093151   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:29.093151   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:29.101731   13512 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0328 00:13:29.589285   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m03
	I0328 00:13:29.589285   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:29.589285   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:29.589285   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:29.596531   13512 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0328 00:13:29.598487   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:13:29.598487   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:29.598487   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:29.598487   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:29.604093   13512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 00:13:29.605165   13512 pod_ready.go:102] pod "kube-apiserver-ha-170000-m03" in "kube-system" namespace has status "Ready":"False"
	I0328 00:13:30.091759   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m03
	I0328 00:13:30.091826   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:30.091826   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:30.091826   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:30.113928   13512 round_trippers.go:574] Response Status: 200 OK in 21 milliseconds
	I0328 00:13:30.115225   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:13:30.115225   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:30.115225   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:30.115225   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:30.119654   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:13:30.592722   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m03
	I0328 00:13:30.592722   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:30.592722   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:30.592722   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:30.599749   13512 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0328 00:13:30.600737   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:13:30.600737   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:30.600737   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:30.600737   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:30.605831   13512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 00:13:31.098612   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m03
	I0328 00:13:31.098612   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:31.098612   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:31.098612   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:31.103082   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:13:31.104480   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:13:31.104536   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:31.104536   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:31.104536   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:31.110459   13512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 00:13:31.597330   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m03
	I0328 00:13:31.597330   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:31.597330   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:31.597330   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:31.603939   13512 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0328 00:13:31.605356   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:13:31.605356   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:31.605356   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:31.605462   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:31.612948   13512 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0328 00:13:31.612948   13512 pod_ready.go:102] pod "kube-apiserver-ha-170000-m03" in "kube-system" namespace has status "Ready":"False"
	I0328 00:13:32.097666   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m03
	I0328 00:13:32.097666   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:32.097666   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:32.097666   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:32.115920   13512 round_trippers.go:574] Response Status: 200 OK in 18 milliseconds
	I0328 00:13:32.116874   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:13:32.116874   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:32.116874   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:32.116874   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:32.124488   13512 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0328 00:13:32.596817   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m03
	I0328 00:13:32.596817   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:32.596817   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:32.596817   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:32.602552   13512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 00:13:32.604431   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:13:32.604431   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:32.604431   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:32.604431   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:32.609050   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:13:33.099353   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m03
	I0328 00:13:33.099432   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:33.099432   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:33.099496   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:33.105748   13512 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0328 00:13:33.107495   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:13:33.107495   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:33.107495   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:33.107495   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:33.112356   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:13:33.586766   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m03
	I0328 00:13:33.587041   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:33.587041   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:33.587041   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:33.593820   13512 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0328 00:13:33.595486   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:13:33.595536   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:33.595536   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:33.595581   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:33.599877   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:13:34.089974   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m03
	I0328 00:13:34.089974   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:34.090040   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:34.090040   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:34.095711   13512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 00:13:34.098516   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:13:34.098571   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:34.098571   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:34.098571   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:34.101769   13512 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0328 00:13:34.103846   13512 pod_ready.go:102] pod "kube-apiserver-ha-170000-m03" in "kube-system" namespace has status "Ready":"False"
	I0328 00:13:34.592230   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m03
	I0328 00:13:34.592230   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:34.592230   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:34.592522   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:34.598583   13512 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0328 00:13:34.601026   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:13:34.601026   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:34.601026   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:34.601026   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:34.606533   13512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 00:13:35.089953   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m03
	I0328 00:13:35.090199   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:35.090199   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:35.090199   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:35.099321   13512 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0328 00:13:35.103091   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:13:35.103091   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:35.103091   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:35.103091   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:35.115395   13512 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0328 00:13:35.585510   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m03
	I0328 00:13:35.585592   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:35.585592   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:35.585592   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:35.592022   13512 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0328 00:13:35.593438   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:13:35.593543   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:35.593543   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:35.593543   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:35.598430   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:13:36.089153   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m03
	I0328 00:13:36.089153   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:36.089243   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:36.089243   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:36.095520   13512 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0328 00:13:36.097195   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:13:36.097195   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:36.097195   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:36.097195   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:36.102701   13512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 00:13:36.589901   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m03
	I0328 00:13:36.589978   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:36.589978   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:36.589978   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:36.598429   13512 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0328 00:13:36.599372   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:13:36.599372   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:36.599372   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:36.599372   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:36.605227   13512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 00:13:36.605884   13512 pod_ready.go:102] pod "kube-apiserver-ha-170000-m03" in "kube-system" namespace has status "Ready":"False"
	I0328 00:13:37.091158   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m03
	I0328 00:13:37.091277   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:37.091277   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:37.091277   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:37.096719   13512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 00:13:37.098492   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:13:37.098492   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:37.098492   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:37.098556   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:37.104530   13512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 00:13:37.589655   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m03
	I0328 00:13:37.589655   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:37.589655   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:37.589655   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:37.597876   13512 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0328 00:13:37.599792   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:13:37.599896   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:37.599896   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:37.599978   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:37.606753   13512 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0328 00:13:38.087351   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m03
	I0328 00:13:38.087351   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:38.087351   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:38.087351   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:38.093869   13512 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0328 00:13:38.095305   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:13:38.095381   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:38.095381   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:38.095381   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:38.099205   13512 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0328 00:13:38.590543   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m03
	I0328 00:13:38.590543   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:38.590543   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:38.590543   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:38.596434   13512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 00:13:38.598503   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:13:38.598580   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:38.598580   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:38.598580   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:38.603808   13512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 00:13:39.091435   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m03
	I0328 00:13:39.091435   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:39.091636   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:39.091636   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:39.097959   13512 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0328 00:13:39.099580   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:13:39.099580   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:39.099580   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:39.099580   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:39.107647   13512 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0328 00:13:39.108642   13512 pod_ready.go:102] pod "kube-apiserver-ha-170000-m03" in "kube-system" namespace has status "Ready":"False"
	I0328 00:13:39.592076   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m03
	I0328 00:13:39.592421   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:39.592421   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:39.592421   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:39.597686   13512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 00:13:39.599328   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:13:39.599494   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:39.599494   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:39.599494   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:39.607241   13512 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0328 00:13:40.092724   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m03
	I0328 00:13:40.092724   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:40.092724   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:40.092724   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:40.099137   13512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 00:13:40.100308   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:13:40.100308   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:40.100308   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:40.100308   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:40.107346   13512 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0328 00:13:40.591297   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m03
	I0328 00:13:40.591297   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:40.591492   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:40.591492   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:40.595887   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:13:40.598010   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:13:40.598010   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:40.598010   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:40.598010   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:40.602243   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:13:41.095829   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m03
	I0328 00:13:41.096166   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:41.096166   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:41.096166   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:41.103258   13512 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0328 00:13:41.104425   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:13:41.104482   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:41.104482   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:41.104482   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:41.108659   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:13:41.109987   13512 pod_ready.go:102] pod "kube-apiserver-ha-170000-m03" in "kube-system" namespace has status "Ready":"False"
	I0328 00:13:41.596182   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m03
	I0328 00:13:41.596182   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:41.596369   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:41.596369   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:41.602785   13512 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0328 00:13:41.604457   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:13:41.604590   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:41.604590   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:41.604590   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:41.609440   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:13:42.098415   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m03
	I0328 00:13:42.098530   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:42.098530   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:42.098530   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:42.102991   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:13:42.104551   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:13:42.104551   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:42.104611   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:42.104611   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:42.108428   13512 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0328 00:13:42.597328   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m03
	I0328 00:13:42.597328   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:42.597328   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:42.597328   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:42.603340   13512 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0328 00:13:42.604367   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:13:42.604367   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:42.604367   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:42.604367   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:42.608710   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:13:43.100254   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m03
	I0328 00:13:43.100254   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:43.100254   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:43.100254   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:43.106665   13512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 00:13:43.108085   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:13:43.108085   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:43.108158   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:43.108158   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:43.113114   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:13:43.113967   13512 pod_ready.go:102] pod "kube-apiserver-ha-170000-m03" in "kube-system" namespace has status "Ready":"False"
	I0328 00:13:43.585631   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m03
	I0328 00:13:43.585791   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:43.585876   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:43.585876   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:43.590397   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:13:43.591604   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:13:43.591604   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:43.591604   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:43.591604   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:43.595215   13512 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0328 00:13:44.085621   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m03
	I0328 00:13:44.085680   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:44.085680   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:44.085680   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:44.090000   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:13:44.090998   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:13:44.090998   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:44.090998   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:44.090998   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:44.095084   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:13:44.589132   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m03
	I0328 00:13:44.589374   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:44.589374   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:44.589374   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:44.594889   13512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 00:13:44.595959   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:13:44.595959   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:44.596018   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:44.596018   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:44.604495   13512 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0328 00:13:45.089720   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m03
	I0328 00:13:45.089720   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:45.089720   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:45.089720   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:45.095312   13512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 00:13:45.096849   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:13:45.096849   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:45.096849   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:45.096849   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:45.101221   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:13:45.593132   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m03
	I0328 00:13:45.593132   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:45.593132   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:45.593132   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:45.599705   13512 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0328 00:13:45.599705   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:13:45.599705   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:45.599705   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:45.599705   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:45.608870   13512 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0328 00:13:45.609825   13512 pod_ready.go:102] pod "kube-apiserver-ha-170000-m03" in "kube-system" namespace has status "Ready":"False"
	I0328 00:13:46.099774   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m03
	I0328 00:13:46.099887   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:46.099887   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:46.099887   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:46.106307   13512 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0328 00:13:46.107645   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:13:46.107645   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:46.107645   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:46.107645   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:46.115071   13512 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0328 00:13:46.589304   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m03
	I0328 00:13:46.589304   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:46.589304   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:46.589304   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:46.599380   13512 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0328 00:13:46.602460   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:13:46.602460   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:46.602460   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:46.602460   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:46.611324   13512 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0328 00:13:47.084809   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m03
	I0328 00:13:47.084886   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:47.084931   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:47.084931   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:47.089927   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:13:47.091312   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:13:47.091312   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:47.091312   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:47.091312   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:47.095907   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:13:47.592528   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m03
	I0328 00:13:47.592528   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:47.592528   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:47.592528   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:47.598175   13512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 00:13:47.600012   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:13:47.600012   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:47.600012   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:47.600012   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:47.605113   13512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 00:13:48.094213   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m03
	I0328 00:13:48.094213   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:48.094213   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:48.094213   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:48.099931   13512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 00:13:48.100987   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:13:48.101062   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:48.101062   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:48.101062   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:48.105263   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:13:48.106717   13512 pod_ready.go:102] pod "kube-apiserver-ha-170000-m03" in "kube-system" namespace has status "Ready":"False"
	I0328 00:13:48.594129   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m03
	I0328 00:13:48.594216   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:48.594216   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:48.594216   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:48.600857   13512 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0328 00:13:48.601963   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:13:48.601963   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:48.601963   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:48.601963   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:48.605663   13512 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0328 00:13:49.094917   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m03
	I0328 00:13:49.095103   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:49.095103   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:49.095103   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:49.103278   13512 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0328 00:13:49.104457   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:13:49.104457   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:49.104457   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:49.104457   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:49.109327   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:13:49.594648   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m03
	I0328 00:13:49.594771   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:49.594771   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:49.594771   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:49.600116   13512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 00:13:49.601637   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:13:49.601637   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:49.601637   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:49.601637   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:49.606910   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:13:50.096535   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m03
	I0328 00:13:50.096535   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:50.096535   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:50.096535   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:50.102017   13512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 00:13:50.103500   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:13:50.103587   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:50.103587   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:50.103587   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:50.107915   13512 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0328 00:13:50.108455   13512 pod_ready.go:102] pod "kube-apiserver-ha-170000-m03" in "kube-system" namespace has status "Ready":"False"
	I0328 00:13:50.596075   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m03
	I0328 00:13:50.596075   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:50.596367   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:50.596367   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:50.601844   13512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 00:13:50.602775   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:13:50.602849   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:50.602849   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:50.602849   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:50.607727   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:13:51.092567   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m03
	I0328 00:13:51.092567   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:51.092567   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:51.092567   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:51.097986   13512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 00:13:51.098604   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:13:51.098604   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:51.098604   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:51.098604   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:51.103235   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:13:51.592952   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m03
	I0328 00:13:51.593007   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:51.593007   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:51.593007   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:51.598607   13512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 00:13:51.600299   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:13:51.600299   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:51.600299   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:51.600299   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:51.604078   13512 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0328 00:13:52.095221   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m03
	I0328 00:13:52.095221   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:52.095221   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:52.095437   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:52.102465   13512 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0328 00:13:52.103778   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:13:52.103986   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:52.103986   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:52.103986   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:52.108763   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:13:52.109628   13512 pod_ready.go:102] pod "kube-apiserver-ha-170000-m03" in "kube-system" namespace has status "Ready":"False"
	I0328 00:13:52.594144   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m03
	I0328 00:13:52.594144   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:52.594144   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:52.594346   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:52.598597   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:13:52.600543   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:13:52.600640   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:52.600640   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:52.600640   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:52.604958   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:13:53.094339   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m03
	I0328 00:13:53.094339   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:53.094339   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:53.094339   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:53.099296   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:13:53.100411   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:13:53.100411   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:53.100411   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:53.100411   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:53.105239   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:13:53.597958   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m03
	I0328 00:13:53.597958   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:53.597958   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:53.597958   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:53.606300   13512 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0328 00:13:53.607688   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:13:53.607997   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:53.607997   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:53.607997   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:53.611290   13512 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0328 00:13:54.099720   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m03
	I0328 00:13:54.099812   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:54.099812   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:54.099812   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:54.106078   13512 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0328 00:13:54.107358   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:13:54.107358   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:54.107358   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:54.107358   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:54.112576   13512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 00:13:54.113825   13512 pod_ready.go:102] pod "kube-apiserver-ha-170000-m03" in "kube-system" namespace has status "Ready":"False"
	I0328 00:13:54.598711   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m03
	I0328 00:13:54.598711   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:54.598711   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:54.598711   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:54.607687   13512 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0328 00:13:54.608828   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:13:54.608828   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:54.608828   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:54.608828   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:54.612675   13512 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0328 00:13:55.097959   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m03
	I0328 00:13:55.098091   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:55.098091   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:55.098091   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:55.103151   13512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 00:13:55.105320   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:13:55.105320   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:55.105320   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:55.105320   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:55.113724   13512 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0328 00:13:55.598832   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m03
	I0328 00:13:55.598832   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:55.598832   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:55.598832   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:55.604545   13512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 00:13:55.606119   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:13:55.606119   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:55.606119   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:55.606119   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:55.610650   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:13:56.087083   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m03
	I0328 00:13:56.087083   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:56.087083   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:56.087083   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:56.094684   13512 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0328 00:13:56.096524   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:13:56.096598   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:56.096598   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:56.096598   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:56.101685   13512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 00:13:56.589938   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m03
	I0328 00:13:56.589938   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:56.589938   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:56.589938   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:56.596364   13512 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0328 00:13:56.598218   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:13:56.598218   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:56.598379   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:56.598379   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:56.603279   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:13:56.604554   13512 pod_ready.go:102] pod "kube-apiserver-ha-170000-m03" in "kube-system" namespace has status "Ready":"False"
	I0328 00:13:57.091611   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m03
	I0328 00:13:57.091611   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:57.091611   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:57.091611   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:57.097965   13512 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0328 00:13:57.099393   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:13:57.099393   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:57.099524   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:57.099524   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:57.105147   13512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 00:13:57.592480   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m03
	I0328 00:13:57.592564   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:57.592623   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:57.592623   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:57.598660   13512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 00:13:57.599263   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:13:57.599263   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:57.599263   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:57.599263   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:57.603925   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:13:58.095478   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m03
	I0328 00:13:58.095478   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:58.095478   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:58.095478   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:58.100942   13512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 00:13:58.102763   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:13:58.102763   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:58.102763   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:58.102763   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:58.111483   13512 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0328 00:13:58.594512   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m03
	I0328 00:13:58.594785   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:58.594785   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:58.594785   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:58.598981   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:13:58.600499   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:13:58.600563   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:58.600563   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:58.600563   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:58.617171   13512 round_trippers.go:574] Response Status: 200 OK in 16 milliseconds
	I0328 00:13:58.618360   13512 pod_ready.go:102] pod "kube-apiserver-ha-170000-m03" in "kube-system" namespace has status "Ready":"False"
	I0328 00:13:59.096697   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m03
	I0328 00:13:59.096697   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:59.096697   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:59.096697   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:59.102942   13512 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0328 00:13:59.105802   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:13:59.105868   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:59.105868   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:59.105868   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:59.110525   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:13:59.594699   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m03
	I0328 00:13:59.594699   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:59.594699   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:59.594699   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:59.600380   13512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 00:13:59.602585   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:13:59.602585   13512 round_trippers.go:469] Request Headers:
	I0328 00:13:59.602585   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:13:59.602651   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:13:59.607146   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:14:00.092956   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m03
	I0328 00:14:00.093196   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:00.093196   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:00.093299   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:00.099408   13512 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0328 00:14:00.100644   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:14:00.100715   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:00.100715   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:00.100715   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:00.104970   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:14:00.591437   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m03
	I0328 00:14:00.591437   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:00.591437   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:00.591437   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:00.598204   13512 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0328 00:14:00.599163   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:14:00.599223   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:00.599223   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:00.599223   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:00.603986   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:14:01.093921   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m03
	I0328 00:14:01.093921   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:01.093921   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:01.093921   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:01.103422   13512 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0328 00:14:01.104472   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:14:01.104534   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:01.104534   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:01.104534   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:01.108873   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:14:01.108873   13512 pod_ready.go:102] pod "kube-apiserver-ha-170000-m03" in "kube-system" namespace has status "Ready":"False"
	I0328 00:14:01.585847   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m03
	I0328 00:14:01.585847   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:01.585847   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:01.585847   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:01.593503   13512 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0328 00:14:01.596592   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:14:01.596592   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:01.596592   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:01.596592   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:01.601210   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:14:02.094133   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m03
	I0328 00:14:02.094188   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:02.094188   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:02.094188   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:02.099808   13512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 00:14:02.100997   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:14:02.101058   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:02.101058   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:02.101058   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:02.104872   13512 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0328 00:14:02.587733   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m03
	I0328 00:14:02.587853   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:02.587853   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:02.587853   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:02.596204   13512 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0328 00:14:02.597878   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:14:02.597878   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:02.597938   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:02.597938   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:02.604889   13512 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0328 00:14:03.091431   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m03
	I0328 00:14:03.091431   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:03.091431   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:03.091431   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:03.097083   13512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 00:14:03.099201   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:14:03.099257   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:03.099257   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:03.099257   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:03.103589   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:14:03.585946   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m03
	I0328 00:14:03.585946   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:03.585946   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:03.585946   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:03.590548   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:14:03.592564   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:14:03.592646   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:03.592646   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:03.592646   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:03.597916   13512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 00:14:03.597916   13512 pod_ready.go:102] pod "kube-apiserver-ha-170000-m03" in "kube-system" namespace has status "Ready":"False"
	I0328 00:14:04.091347   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m03
	I0328 00:14:04.091347   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:04.091347   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:04.091347   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:04.097016   13512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 00:14:04.098804   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:14:04.098850   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:04.098850   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:04.098850   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:04.105001   13512 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0328 00:14:04.593783   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m03
	I0328 00:14:04.593919   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:04.593919   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:04.593969   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:04.599357   13512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 00:14:04.600435   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:14:04.600508   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:04.600508   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:04.600562   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:04.604791   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:14:05.087461   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m03
	I0328 00:14:05.087461   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:05.087541   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:05.087541   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:05.100612   13512 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0328 00:14:05.102476   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:14:05.102476   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:05.102476   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:05.102476   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:05.111616   13512 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0328 00:14:05.586138   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m03
	I0328 00:14:05.586448   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:05.586448   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:05.586448   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:05.593703   13512 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0328 00:14:05.595108   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:14:05.595108   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:05.595216   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:05.595216   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:05.599503   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:14:05.600818   13512 pod_ready.go:102] pod "kube-apiserver-ha-170000-m03" in "kube-system" namespace has status "Ready":"False"
	I0328 00:14:06.090869   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m03
	I0328 00:14:06.091173   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:06.091208   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:06.091208   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:06.098891   13512 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0328 00:14:06.100088   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:14:06.100088   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:06.100088   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:06.100088   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:06.108590   13512 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0328 00:14:06.594644   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m03
	I0328 00:14:06.594644   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:06.594644   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:06.594644   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:06.603045   13512 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0328 00:14:06.603979   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:14:06.603979   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:06.603979   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:06.603979   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:06.608612   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:14:07.096661   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m03
	I0328 00:14:07.096661   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:07.096661   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:07.096661   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:07.103314   13512 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0328 00:14:07.105052   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:14:07.105052   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:07.105052   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:07.105052   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:07.110165   13512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 00:14:07.595727   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m03
	I0328 00:14:07.595789   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:07.595789   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:07.595848   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:07.601333   13512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 00:14:07.603421   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:14:07.603421   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:07.603421   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:07.603421   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:07.606713   13512 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0328 00:14:07.608329   13512 pod_ready.go:102] pod "kube-apiserver-ha-170000-m03" in "kube-system" namespace has status "Ready":"False"
	I0328 00:14:08.097410   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m03
	I0328 00:14:08.097410   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:08.097410   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:08.097410   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:08.103066   13512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 00:14:08.104623   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:14:08.104623   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:08.104623   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:08.104623   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:08.109238   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:14:08.598431   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m03
	I0328 00:14:08.598431   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:08.598431   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:08.598431   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:08.605500   13512 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0328 00:14:08.607284   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:14:08.607284   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:08.607284   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:08.607284   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:08.612125   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:14:09.088064   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m03
	I0328 00:14:09.088064   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:09.088197   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:09.088197   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:09.095686   13512 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0328 00:14:09.096689   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:14:09.096689   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:09.096689   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:09.096689   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:09.103673   13512 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0328 00:14:09.597224   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m03
	I0328 00:14:09.597224   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:09.597224   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:09.597224   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:09.603831   13512 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0328 00:14:09.605771   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:14:09.605771   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:09.605771   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:09.605771   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:09.611065   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:14:09.611974   13512 pod_ready.go:102] pod "kube-apiserver-ha-170000-m03" in "kube-system" namespace has status "Ready":"False"
	I0328 00:14:10.085546   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m03
	I0328 00:14:10.085760   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:10.085836   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:10.085836   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:10.091620   13512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 00:14:10.093251   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:14:10.093309   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:10.093309   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:10.093309   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:10.097014   13512 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0328 00:14:10.591317   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m03
	I0328 00:14:10.591388   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:10.591388   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:10.591388   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:10.596237   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:14:10.597599   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:14:10.597599   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:10.597599   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:10.597599   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:10.602070   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:14:11.099422   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m03
	I0328 00:14:11.099485   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:11.099485   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:11.099485   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:11.110387   13512 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0328 00:14:11.111393   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:14:11.111393   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:11.111393   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:11.111393   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:11.115380   13512 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0328 00:14:11.589210   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m03
	I0328 00:14:11.589844   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:11.589844   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:11.589844   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:11.596277   13512 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0328 00:14:11.597507   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:14:11.597564   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:11.597594   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:11.597594   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:11.603040   13512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 00:14:12.089506   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m03
	I0328 00:14:12.089506   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:12.089995   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:12.089995   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:12.096321   13512 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0328 00:14:12.098086   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:14:12.098172   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:12.098172   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:12.098172   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:12.110419   13512 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0328 00:14:12.111291   13512 pod_ready.go:102] pod "kube-apiserver-ha-170000-m03" in "kube-system" namespace has status "Ready":"False"
	I0328 00:14:12.591998   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m03
	I0328 00:14:12.591998   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:12.591998   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:12.591998   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:12.597393   13512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 00:14:12.598765   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:14:12.598946   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:12.598946   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:12.598946   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:12.604091   13512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 00:14:13.092828   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m03
	I0328 00:14:13.092828   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:13.092925   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:13.092925   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:13.097390   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:14:13.098401   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:14:13.098401   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:13.098495   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:13.098495   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:13.102207   13512 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0328 00:14:13.591425   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m03
	I0328 00:14:13.591425   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:13.591514   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:13.591514   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:13.597387   13512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 00:14:13.598913   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:14:13.598913   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:13.598977   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:13.598977   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:13.603750   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:14:14.096012   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m03
	I0328 00:14:14.096012   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:14.096012   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:14.096012   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:14.103668   13512 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0328 00:14:14.104896   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:14:14.104896   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:14.104896   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:14.104896   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:14.110150   13512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 00:14:14.585714   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m03
	I0328 00:14:14.585714   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:14.585714   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:14.585714   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:14.597939   13512 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0328 00:14:14.599229   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:14:14.599229   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:14.599229   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:14.599229   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:14.604815   13512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 00:14:14.604815   13512 pod_ready.go:102] pod "kube-apiserver-ha-170000-m03" in "kube-system" namespace has status "Ready":"False"
	I0328 00:14:15.087288   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m03
	I0328 00:14:15.087288   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:15.087355   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:15.087355   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:15.092243   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:14:15.093453   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:14:15.093513   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:15.093513   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:15.093513   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:15.097437   13512 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0328 00:14:15.586287   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m03
	I0328 00:14:15.586520   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:15.586520   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:15.586520   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:15.592678   13512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 00:14:15.593688   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:14:15.593688   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:15.593688   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:15.593688   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:15.598201   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:14:16.087111   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m03
	I0328 00:14:16.087198   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:16.087273   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:16.087273   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:16.092695   13512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 00:14:16.093847   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:14:16.093847   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:16.093847   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:16.093847   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:16.099072   13512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 00:14:16.592426   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m03
	I0328 00:14:16.592652   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:16.592652   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:16.592652   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:16.597973   13512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 00:14:16.599259   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:14:16.599259   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:16.599259   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:16.599259   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:16.604212   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:14:16.605163   13512 pod_ready.go:102] pod "kube-apiserver-ha-170000-m03" in "kube-system" namespace has status "Ready":"False"
	I0328 00:14:17.096990   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m03
	I0328 00:14:17.096990   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:17.096990   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:17.096990   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:17.102726   13512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 00:14:17.104170   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:14:17.104232   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:17.104232   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:17.104232   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:17.114715   13512 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0328 00:14:17.600701   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m03
	I0328 00:14:17.600701   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:17.600701   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:17.600701   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:17.610583   13512 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0328 00:14:17.612915   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:14:17.612968   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:17.612968   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:17.612968   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:17.621379   13512 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0328 00:14:18.087119   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-170000-m03
	I0328 00:14:18.087119   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:18.087119   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:18.087119   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:18.093045   13512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 00:14:18.094660   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:14:18.094721   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:18.094721   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:18.094721   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:18.099551   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:14:18.101385   13512 pod_ready.go:92] pod "kube-apiserver-ha-170000-m03" in "kube-system" namespace has status "Ready":"True"
	I0328 00:14:18.101450   13512 pod_ready.go:81] duration metric: took 1m5.0168595s for pod "kube-apiserver-ha-170000-m03" in "kube-system" namespace to be "Ready" ...
	I0328 00:14:18.101537   13512 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-170000" in "kube-system" namespace to be "Ready" ...
	I0328 00:14:18.101685   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-170000
	I0328 00:14:18.101685   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:18.101685   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:18.101741   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:18.107186   13512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 00:14:18.109056   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000
	I0328 00:14:18.109056   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:18.109056   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:18.109114   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:18.124848   13512 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0328 00:14:18.125429   13512 pod_ready.go:92] pod "kube-controller-manager-ha-170000" in "kube-system" namespace has status "Ready":"True"
	I0328 00:14:18.125429   13512 pod_ready.go:81] duration metric: took 23.892ms for pod "kube-controller-manager-ha-170000" in "kube-system" namespace to be "Ready" ...
	I0328 00:14:18.125429   13512 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-170000-m02" in "kube-system" namespace to be "Ready" ...
	I0328 00:14:18.125602   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-170000-m02
	I0328 00:14:18.125602   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:18.125602   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:18.125602   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:18.130481   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:14:18.132385   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m02
	I0328 00:14:18.132385   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:18.132385   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:18.132385   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:18.149200   13512 round_trippers.go:574] Response Status: 200 OK in 16 milliseconds
	I0328 00:14:18.150809   13512 pod_ready.go:92] pod "kube-controller-manager-ha-170000-m02" in "kube-system" namespace has status "Ready":"True"
	I0328 00:14:18.150809   13512 pod_ready.go:81] duration metric: took 25.3802ms for pod "kube-controller-manager-ha-170000-m02" in "kube-system" namespace to be "Ready" ...
	I0328 00:14:18.150809   13512 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-170000-m03" in "kube-system" namespace to be "Ready" ...
	I0328 00:14:18.151045   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-170000-m03
	I0328 00:14:18.151045   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:18.151045   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:18.151045   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:18.155587   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:14:18.157527   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:14:18.157527   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:18.157527   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:18.157637   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:18.162819   13512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 00:14:18.651494   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-170000-m03
	I0328 00:14:18.651494   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:18.651726   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:18.651726   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:18.657690   13512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 00:14:18.658897   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:14:18.658897   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:18.658897   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:18.658897   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:18.664776   13512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 00:14:19.152216   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-170000-m03
	I0328 00:14:19.152216   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:19.152216   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:19.152216   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:19.159625   13512 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0328 00:14:19.161117   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:14:19.161117   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:19.161117   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:19.161117   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:19.172572   13512 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0328 00:14:19.653957   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-170000-m03
	I0328 00:14:19.653957   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:19.653957   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:19.653957   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:19.659549   13512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 00:14:19.660770   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:14:19.660770   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:19.660770   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:19.660770   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:19.666360   13512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 00:14:20.155923   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-170000-m03
	I0328 00:14:20.155923   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:20.156030   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:20.156030   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:20.165101   13512 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0328 00:14:20.166110   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:14:20.166110   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:20.166110   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:20.166110   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:20.171480   13512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 00:14:20.172298   13512 pod_ready.go:102] pod "kube-controller-manager-ha-170000-m03" in "kube-system" namespace has status "Ready":"False"
	I0328 00:14:20.661425   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-170000-m03
	I0328 00:14:20.661425   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:20.661425   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:20.661425   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:20.667470   13512 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0328 00:14:20.669420   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:14:20.669503   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:20.669503   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:20.669503   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:20.674216   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:14:21.161321   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-170000-m03
	I0328 00:14:21.161321   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:21.161321   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:21.161426   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:21.167223   13512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 00:14:21.168466   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:14:21.168466   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:21.168466   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:21.168466   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:21.194393   13512 round_trippers.go:574] Response Status: 200 OK in 25 milliseconds
	I0328 00:14:21.651478   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-170000-m03
	I0328 00:14:21.651672   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:21.651672   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:21.651737   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:21.657470   13512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 00:14:21.658852   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:14:21.658852   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:21.658852   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:21.658852   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:21.663659   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:14:22.153264   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-170000-m03
	I0328 00:14:22.153264   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:22.153346   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:22.153346   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:22.159322   13512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 00:14:22.161080   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:14:22.161136   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:22.161136   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:22.161136   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:22.165925   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:14:22.654843   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-170000-m03
	I0328 00:14:22.654923   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:22.654923   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:22.654923   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:22.661477   13512 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0328 00:14:22.662412   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:14:22.662490   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:22.662490   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:22.662490   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:22.667213   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:14:22.668252   13512 pod_ready.go:102] pod "kube-controller-manager-ha-170000-m03" in "kube-system" namespace has status "Ready":"False"
	I0328 00:14:23.155756   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-170000-m03
	I0328 00:14:23.155841   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:23.155841   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:23.155841   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:23.164182   13512 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0328 00:14:23.164900   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:14:23.164900   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:23.164900   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:23.164900   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:23.169495   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:14:23.658760   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-170000-m03
	I0328 00:14:23.658760   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:23.658760   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:23.658760   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:23.663275   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:14:23.665264   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:14:23.665306   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:23.665306   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:23.665306   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:23.669580   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:14:24.159856   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-170000-m03
	I0328 00:14:24.159856   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:24.159856   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:24.159856   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:24.165170   13512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 00:14:24.166786   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:14:24.166786   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:24.166786   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:24.166786   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:24.170257   13512 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0328 00:14:24.661776   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-170000-m03
	I0328 00:14:24.661844   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:24.661844   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:24.661844   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:24.667879   13512 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0328 00:14:24.669590   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:14:24.669780   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:24.669780   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:24.669780   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:24.677170   13512 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0328 00:14:24.678096   13512 pod_ready.go:102] pod "kube-controller-manager-ha-170000-m03" in "kube-system" namespace has status "Ready":"False"
	I0328 00:14:25.164310   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-170000-m03
	I0328 00:14:25.164310   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:25.164310   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:25.164310   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:25.169937   13512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 00:14:25.170731   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:14:25.170731   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:25.170731   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:25.170731   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:25.185348   13512 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0328 00:14:25.665314   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-170000-m03
	I0328 00:14:25.665388   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:25.665388   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:25.665388   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:25.671005   13512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 00:14:25.672282   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:14:25.672282   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:25.672282   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:25.672282   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:25.676722   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:14:26.157831   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-170000-m03
	I0328 00:14:26.157920   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:26.157920   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:26.157920   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:26.164208   13512 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0328 00:14:26.165174   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:14:26.165174   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:26.165174   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:26.165174   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:26.169654   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:14:26.657144   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-170000-m03
	I0328 00:14:26.657144   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:26.657144   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:26.657144   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:26.663624   13512 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0328 00:14:26.664510   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:14:26.664617   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:26.664617   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:26.664617   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:26.668954   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:14:27.159926   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-170000-m03
	I0328 00:14:27.160056   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:27.160056   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:27.160056   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:27.166491   13512 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0328 00:14:27.168532   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:14:27.168532   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:27.168577   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:27.168577   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:27.175543   13512 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0328 00:14:27.176222   13512 pod_ready.go:102] pod "kube-controller-manager-ha-170000-m03" in "kube-system" namespace has status "Ready":"False"
	I0328 00:14:27.659965   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-170000-m03
	I0328 00:14:27.660180   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:27.660180   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:27.660180   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:27.665328   13512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 00:14:27.667425   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:14:27.667425   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:27.667517   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:27.667517   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:27.675295   13512 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0328 00:14:28.162890   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-170000-m03
	I0328 00:14:28.163186   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:28.163291   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:28.163291   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:28.168275   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:14:28.169966   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:14:28.169966   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:28.170042   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:28.170042   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:28.176562   13512 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0328 00:14:28.663637   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-170000-m03
	I0328 00:14:28.663637   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:28.663637   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:28.663637   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:28.667347   13512 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0328 00:14:28.669385   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:14:28.669385   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:28.669385   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:28.669385   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:28.672986   13512 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0328 00:14:29.155700   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-170000-m03
	I0328 00:14:29.155700   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:29.155700   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:29.155700   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:29.160484   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:14:29.162309   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:14:29.162309   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:29.162309   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:29.162309   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:29.166907   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:14:29.660956   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-170000-m03
	I0328 00:14:29.660956   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:29.660956   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:29.660956   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:29.666189   13512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 00:14:29.668826   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:14:29.668826   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:29.669055   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:29.669055   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:29.674833   13512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 00:14:29.675581   13512 pod_ready.go:102] pod "kube-controller-manager-ha-170000-m03" in "kube-system" namespace has status "Ready":"False"
	I0328 00:14:30.166521   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-170000-m03
	I0328 00:14:30.166756   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:30.166756   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:30.166756   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:30.174007   13512 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0328 00:14:30.175080   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:14:30.175141   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:30.175141   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:30.175141   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:30.179609   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:14:30.654360   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-170000-m03
	I0328 00:14:30.654422   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:30.654422   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:30.654422   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:30.659781   13512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 00:14:30.660581   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:14:30.660581   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:30.660581   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:30.660581   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:30.666355   13512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 00:14:31.157679   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-170000-m03
	I0328 00:14:31.157745   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:31.157745   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:31.157745   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:31.167037   13512 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0328 00:14:31.167651   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:14:31.167651   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:31.168194   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:31.168194   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:31.173337   13512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 00:14:31.663420   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-170000-m03
	I0328 00:14:31.663420   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:31.663420   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:31.663420   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:31.670342   13512 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0328 00:14:31.672214   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:14:31.672214   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:31.672214   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:31.672214   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:31.676583   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:14:31.677546   13512 pod_ready.go:102] pod "kube-controller-manager-ha-170000-m03" in "kube-system" namespace has status "Ready":"False"
	I0328 00:14:32.165160   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-170000-m03
	I0328 00:14:32.165160   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:32.165160   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:32.165160   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:32.173573   13512 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0328 00:14:32.175437   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:14:32.175605   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:32.175699   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:32.175699   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:32.182495   13512 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0328 00:14:32.652995   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-170000-m03
	I0328 00:14:32.652995   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:32.652995   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:32.652995   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:32.659076   13512 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0328 00:14:32.659947   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:14:32.659947   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:32.659947   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:32.659947   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:32.664306   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:14:33.159352   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-170000-m03
	I0328 00:14:33.159649   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:33.159649   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:33.159649   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:33.165106   13512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 00:14:33.167183   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:14:33.167183   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:33.167183   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:33.167248   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:33.173131   13512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 00:14:33.659417   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-170000-m03
	I0328 00:14:33.659491   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:33.659491   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:33.659491   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:33.665762   13512 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0328 00:14:33.666948   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:14:33.667026   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:33.667026   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:33.667026   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:33.672263   13512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 00:14:34.161514   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-170000-m03
	I0328 00:14:34.161514   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:34.161514   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:34.161514   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:34.167954   13512 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0328 00:14:34.169125   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:14:34.169125   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:34.169125   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:34.169125   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:34.174426   13512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 00:14:34.175921   13512 pod_ready.go:102] pod "kube-controller-manager-ha-170000-m03" in "kube-system" namespace has status "Ready":"False"
	I0328 00:14:34.664641   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-170000-m03
	I0328 00:14:34.664745   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:34.664745   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:34.664745   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:34.670710   13512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 00:14:34.672496   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:14:34.672496   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:34.672555   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:34.672555   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:34.677044   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:14:35.166721   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-170000-m03
	I0328 00:14:35.166721   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:35.166800   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:35.166800   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:35.173158   13512 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0328 00:14:35.174092   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:14:35.174166   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:35.174166   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:35.174166   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:35.178980   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:14:35.653943   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-170000-m03
	I0328 00:14:35.653943   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:35.653943   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:35.653943   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:35.659636   13512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 00:14:35.661654   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:14:35.661739   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:35.661739   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:35.661739   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:35.667054   13512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 00:14:36.154900   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-170000-m03
	I0328 00:14:36.155267   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:36.155267   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:36.155267   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:36.160762   13512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 00:14:36.162512   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:14:36.162650   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:36.162650   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:36.162650   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:36.166778   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:14:36.655853   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-170000-m03
	I0328 00:14:36.655853   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:36.655853   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:36.655853   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:36.662238   13512 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0328 00:14:36.663533   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:14:36.663533   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:36.663533   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:36.663533   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:36.668121   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:14:36.668121   13512 pod_ready.go:102] pod "kube-controller-manager-ha-170000-m03" in "kube-system" namespace has status "Ready":"False"
	I0328 00:14:37.156431   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-170000-m03
	I0328 00:14:37.156506   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:37.156506   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:37.156506   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:37.161092   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:14:37.162614   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:14:37.162614   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:37.162614   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:37.162614   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:37.166846   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:14:37.657585   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-170000-m03
	I0328 00:14:37.657665   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:37.657665   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:37.657759   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:37.664329   13512 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0328 00:14:37.665027   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:14:37.665207   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:37.665207   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:37.665207   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:37.669811   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:14:38.158449   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-170000-m03
	I0328 00:14:38.158681   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:38.158681   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:38.158681   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:38.164353   13512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 00:14:38.165703   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:14:38.165703   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:38.165703   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:38.165703   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:38.170321   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:14:38.661390   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-170000-m03
	I0328 00:14:38.661460   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:38.661531   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:38.661531   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:38.667045   13512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 00:14:38.668344   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:14:38.668344   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:38.668417   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:38.668417   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:38.672667   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:14:38.674288   13512 pod_ready.go:102] pod "kube-controller-manager-ha-170000-m03" in "kube-system" namespace has status "Ready":"False"
	I0328 00:14:39.160857   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-170000-m03
	I0328 00:14:39.160857   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:39.160857   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:39.160857   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:39.169717   13512 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0328 00:14:39.170509   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:14:39.170509   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:39.170509   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:39.170509   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:39.175675   13512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 00:14:39.663994   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-170000-m03
	I0328 00:14:39.663994   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:39.663994   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:39.663994   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:39.670488   13512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 00:14:39.671554   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:14:39.671554   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:39.671554   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:39.671657   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:39.676825   13512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 00:14:40.153243   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-170000-m03
	I0328 00:14:40.153587   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:40.153587   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:40.153587   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:40.159923   13512 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0328 00:14:40.161636   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:14:40.161636   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:40.161636   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:40.161636   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:40.166222   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:14:40.657367   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-170000-m03
	I0328 00:14:40.657426   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:40.657426   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:40.657426   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:40.664028   13512 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0328 00:14:40.664728   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:14:40.664728   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:40.664728   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:40.664728   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:40.677659   13512 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0328 00:14:40.677659   13512 pod_ready.go:102] pod "kube-controller-manager-ha-170000-m03" in "kube-system" namespace has status "Ready":"False"
	I0328 00:14:41.162773   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-170000-m03
	I0328 00:14:41.162983   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:41.162983   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:41.162983   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:41.169404   13512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 00:14:41.169404   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:14:41.169404   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:41.170391   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:41.170391   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:41.174392   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:14:41.176391   13512 pod_ready.go:92] pod "kube-controller-manager-ha-170000-m03" in "kube-system" namespace has status "Ready":"True"
	I0328 00:14:41.176391   13512 pod_ready.go:81] duration metric: took 23.0254363s for pod "kube-controller-manager-ha-170000-m03" in "kube-system" namespace to be "Ready" ...
	I0328 00:14:41.176391   13512 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-29dwg" in "kube-system" namespace to be "Ready" ...
	I0328 00:14:41.176391   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-proxy-29dwg
	I0328 00:14:41.176391   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:41.176391   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:41.176391   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:41.184856   13512 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0328 00:14:41.185862   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:14:41.185862   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:41.185862   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:41.185862   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:41.190594   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:14:41.191219   13512 pod_ready.go:92] pod "kube-proxy-29dwg" in "kube-system" namespace has status "Ready":"True"
	I0328 00:14:41.191287   13512 pod_ready.go:81] duration metric: took 14.8965ms for pod "kube-proxy-29dwg" in "kube-system" namespace to be "Ready" ...
	I0328 00:14:41.191287   13512 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-w2z74" in "kube-system" namespace to be "Ready" ...
	I0328 00:14:41.191351   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-proxy-w2z74
	I0328 00:14:41.191465   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:41.191465   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:41.191465   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:41.195173   13512 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0328 00:14:41.196197   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000
	I0328 00:14:41.196197   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:41.196197   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:41.196197   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:41.201158   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:14:41.202236   13512 pod_ready.go:92] pod "kube-proxy-w2z74" in "kube-system" namespace has status "Ready":"True"
	I0328 00:14:41.202236   13512 pod_ready.go:81] duration metric: took 10.9482ms for pod "kube-proxy-w2z74" in "kube-system" namespace to be "Ready" ...
	I0328 00:14:41.202236   13512 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-wrvmg" in "kube-system" namespace to be "Ready" ...
	I0328 00:14:41.202236   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-proxy-wrvmg
	I0328 00:14:41.202236   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:41.202236   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:41.202236   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:41.207209   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:14:41.209086   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m02
	I0328 00:14:41.209178   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:41.209178   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:41.209178   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:41.213581   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:14:41.213581   13512 pod_ready.go:92] pod "kube-proxy-wrvmg" in "kube-system" namespace has status "Ready":"True"
	I0328 00:14:41.213581   13512 pod_ready.go:81] duration metric: took 11.3456ms for pod "kube-proxy-wrvmg" in "kube-system" namespace to be "Ready" ...
	I0328 00:14:41.213581   13512 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-170000" in "kube-system" namespace to be "Ready" ...
	I0328 00:14:41.214601   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-170000
	I0328 00:14:41.214601   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:41.214601   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:41.214601   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:41.218663   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:14:41.219099   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000
	I0328 00:14:41.219099   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:41.219099   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:41.219099   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:41.222799   13512 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0328 00:14:41.224023   13512 pod_ready.go:92] pod "kube-scheduler-ha-170000" in "kube-system" namespace has status "Ready":"True"
	I0328 00:14:41.224023   13512 pod_ready.go:81] duration metric: took 10.4414ms for pod "kube-scheduler-ha-170000" in "kube-system" namespace to be "Ready" ...
	I0328 00:14:41.224023   13512 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-170000-m02" in "kube-system" namespace to be "Ready" ...
	I0328 00:14:41.371126   13512 request.go:629] Waited for 146.7856ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-170000-m02
	I0328 00:14:41.371182   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-170000-m02
	I0328 00:14:41.371182   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:41.371182   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:41.371182   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:41.376824   13512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 00:14:41.574646   13512 request.go:629] Waited for 196.133ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.239.31:8443/api/v1/nodes/ha-170000-m02
	I0328 00:14:41.574646   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m02
	I0328 00:14:41.574646   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:41.574646   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:41.574646   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:41.579949   13512 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 00:14:41.581028   13512 pod_ready.go:92] pod "kube-scheduler-ha-170000-m02" in "kube-system" namespace has status "Ready":"True"
	I0328 00:14:41.581028   13512 pod_ready.go:81] duration metric: took 357.0028ms for pod "kube-scheduler-ha-170000-m02" in "kube-system" namespace to be "Ready" ...
	I0328 00:14:41.581028   13512 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-170000-m03" in "kube-system" namespace to be "Ready" ...
	I0328 00:14:41.777600   13512 request.go:629] Waited for 195.9745ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-170000-m03
	I0328 00:14:41.777663   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-170000-m03
	I0328 00:14:41.777722   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:41.777722   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:41.777722   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:41.786114   13512 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0328 00:14:41.965251   13512 request.go:629] Waited for 177.6128ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:14:41.965461   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes/ha-170000-m03
	I0328 00:14:41.965461   13512 round_trippers.go:469] Request Headers:
	I0328 00:14:41.965461   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:14:41.965591   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:14:41.974823   13512 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0328 00:14:41.975612   13512 pod_ready.go:92] pod "kube-scheduler-ha-170000-m03" in "kube-system" namespace has status "Ready":"True"
	I0328 00:14:41.975612   13512 pod_ready.go:81] duration metric: took 394.3864ms for pod "kube-scheduler-ha-170000-m03" in "kube-system" namespace to be "Ready" ...
	I0328 00:14:41.975612   13512 pod_ready.go:38] duration metric: took 1m30.1022361s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0328 00:14:41.975737   13512 api_server.go:52] waiting for apiserver process to appear ...
	I0328 00:14:41.987127   13512 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0328 00:14:42.015825   13512 logs.go:276] 2 containers: [2d1fcac82c22 469d6ee62f5d]
	I0328 00:14:42.026489   13512 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0328 00:14:42.052951   13512 logs.go:276] 1 containers: [876120cb9271]
	I0328 00:14:42.064829   13512 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0328 00:14:42.094836   13512 logs.go:276] 0 containers: []
	W0328 00:14:42.094935   13512 logs.go:278] No container was found matching "coredns"
	I0328 00:14:42.104927   13512 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0328 00:14:42.130106   13512 logs.go:276] 1 containers: [7c734b945c80]
	I0328 00:14:42.139720   13512 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0328 00:14:42.168581   13512 logs.go:276] 1 containers: [9c877ca8a645]
	I0328 00:14:42.178080   13512 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0328 00:14:42.204210   13512 logs.go:276] 2 containers: [1c949d54f393 1d96dd72244b]
	I0328 00:14:42.214594   13512 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0328 00:14:42.239935   13512 logs.go:276] 1 containers: [6dcd6df77ad0]
	I0328 00:14:42.239935   13512 logs.go:123] Gathering logs for kube-controller-manager [1c949d54f393] ...
	I0328 00:14:42.239935   13512 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c949d54f393"
	I0328 00:14:42.293277   13512 logs.go:123] Gathering logs for kindnet [6dcd6df77ad0] ...
	I0328 00:14:42.293277   13512 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6dcd6df77ad0"
	I0328 00:14:42.330955   13512 logs.go:123] Gathering logs for Docker ...
	I0328 00:14:42.331068   13512 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0328 00:14:42.405581   13512 logs.go:123] Gathering logs for dmesg ...
	I0328 00:14:42.405581   13512 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 00:14:42.440376   13512 logs.go:123] Gathering logs for kube-apiserver [2d1fcac82c22] ...
	I0328 00:14:42.440376   13512 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d1fcac82c22"
	I0328 00:14:42.490372   13512 logs.go:123] Gathering logs for kube-apiserver [469d6ee62f5d] ...
	I0328 00:14:42.490372   13512 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 469d6ee62f5d"
	I0328 00:14:42.582240   13512 logs.go:123] Gathering logs for etcd [876120cb9271] ...
	I0328 00:14:42.582540   13512 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 876120cb9271"
	I0328 00:14:42.637380   13512 logs.go:123] Gathering logs for kube-scheduler [7c734b945c80] ...
	I0328 00:14:42.637380   13512 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c734b945c80"
	I0328 00:14:42.704589   13512 logs.go:123] Gathering logs for container status ...
	I0328 00:14:42.704589   13512 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 00:14:42.819957   13512 logs.go:123] Gathering logs for kubelet ...
	I0328 00:14:42.820029   13512 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0328 00:14:42.894219   13512 logs.go:138] Found kubelet problem: Mar 28 00:12:52 ha-170000-m03 kubelet[2040]: W0328 00:12:52.440946    2040 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
	W0328 00:14:42.894219   13512 logs.go:138] Found kubelet problem: Mar 28 00:12:52 ha-170000-m03 kubelet[2040]: E0328 00:12:52.441012    2040 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
	W0328 00:14:42.894219   13512 logs.go:138] Found kubelet problem: Mar 28 00:12:52 ha-170000-m03 kubelet[2040]: W0328 00:12:52.441103    2040 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: nodes "ha-170000-m03" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
	W0328 00:14:42.894219   13512 logs.go:138] Found kubelet problem: Mar 28 00:12:52 ha-170000-m03 kubelet[2040]: E0328 00:12:52.441122    2040 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: nodes "ha-170000-m03" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
	W0328 00:14:42.896214   13512 logs.go:138] Found kubelet problem: Mar 28 00:12:52 ha-170000-m03 kubelet[2040]: E0328 00:12:52.466071    2040 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{ha-170000-m03.17c0c5483cee70cc  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ha-170000-m03,UID:ha-170000-m03,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ha-170000-m03,},FirstTimestamp:2024-03-28 00:12:52.451365068 +0000 UTC m=+0.802131598,LastTimestamp:2024-03-28 00:12:52.451365068 +0000 UTC m=+0.802131598,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-170000-m03,}"
	W0328 00:14:42.896214   13512 logs.go:138] Found kubelet problem: Mar 28 00:12:52 ha-170000-m03 kubelet[2040]: W0328 00:12:52.467071    2040 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0328 00:14:42.896214   13512 logs.go:138] Found kubelet problem: Mar 28 00:12:52 ha-170000-m03 kubelet[2040]: E0328 00:12:52.467127    2040 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0328 00:14:42.916233   13512 logs.go:123] Gathering logs for describe nodes ...
	I0328 00:14:42.916233   13512 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0328 00:14:43.482822   13512 logs.go:123] Gathering logs for kube-proxy [9c877ca8a645] ...
	I0328 00:14:43.482822   13512 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c877ca8a645"
	I0328 00:14:43.519577   13512 logs.go:123] Gathering logs for kube-controller-manager [1d96dd72244b] ...
	I0328 00:14:43.519577   13512 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d96dd72244b"
	I0328 00:14:43.560324   13512 out.go:304] Setting ErrFile to fd 920...
	I0328 00:14:43.560324   13512 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0328 00:14:43.560324   13512 out.go:239] X Problems detected in kubelet:
	W0328 00:14:43.560324   13512 out.go:239]   Mar 28 00:12:52 ha-170000-m03 kubelet[2040]: W0328 00:12:52.441103    2040 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: nodes "ha-170000-m03" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
	W0328 00:14:43.560324   13512 out.go:239]   Mar 28 00:12:52 ha-170000-m03 kubelet[2040]: E0328 00:12:52.441122    2040 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: nodes "ha-170000-m03" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
	W0328 00:14:43.560876   13512 out.go:239]   Mar 28 00:12:52 ha-170000-m03 kubelet[2040]: E0328 00:12:52.466071    2040 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{ha-170000-m03.17c0c5483cee70cc  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ha-170000-m03,UID:ha-170000-m03,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ha-170000-m03,},FirstTimestamp:2024-03-28 00:12:52.451365068 +0000 UTC m=+0.802131598,LastTimestamp:2024-03-28 00:12:52.451365068 +0000 UTC m=+0.802131598,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-170000-m03,}"
	W0328 00:14:43.560876   13512 out.go:239]   Mar 28 00:12:52 ha-170000-m03 kubelet[2040]: W0328 00:12:52.467071    2040 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0328 00:14:43.560876   13512 out.go:239]   Mar 28 00:12:52 ha-170000-m03 kubelet[2040]: E0328 00:12:52.467127    2040 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0328 00:14:43.560876   13512 out.go:304] Setting ErrFile to fd 920...
	I0328 00:14:43.560994   13512 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 00:14:53.589366   13512 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 00:14:53.621160   13512 api_server.go:72] duration metric: took 1m45.7329679s to wait for apiserver process to appear ...
	I0328 00:14:53.621233   13512 api_server.go:88] waiting for apiserver healthz status ...
	I0328 00:14:53.630167   13512 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0328 00:14:53.657192   13512 logs.go:276] 2 containers: [2d1fcac82c22 469d6ee62f5d]
	I0328 00:14:53.668105   13512 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0328 00:14:53.697255   13512 logs.go:276] 1 containers: [876120cb9271]
	I0328 00:14:53.706793   13512 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0328 00:14:53.733562   13512 logs.go:276] 0 containers: []
	W0328 00:14:53.733562   13512 logs.go:278] No container was found matching "coredns"
	I0328 00:14:53.744194   13512 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0328 00:14:53.780424   13512 logs.go:276] 1 containers: [7c734b945c80]
	I0328 00:14:53.790235   13512 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0328 00:14:53.817335   13512 logs.go:276] 1 containers: [9c877ca8a645]
	I0328 00:14:53.827888   13512 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0328 00:14:53.862816   13512 logs.go:276] 2 containers: [1c949d54f393 1d96dd72244b]
	I0328 00:14:53.873285   13512 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0328 00:14:53.903652   13512 logs.go:276] 1 containers: [6dcd6df77ad0]
	I0328 00:14:53.903652   13512 logs.go:123] Gathering logs for Docker ...
	I0328 00:14:53.904667   13512 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0328 00:14:53.979101   13512 logs.go:123] Gathering logs for container status ...
	I0328 00:14:53.979101   13512 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 00:14:54.135110   13512 logs.go:123] Gathering logs for dmesg ...
	I0328 00:14:54.135194   13512 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 00:14:54.166983   13512 logs.go:123] Gathering logs for describe nodes ...
	I0328 00:14:54.167056   13512 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0328 00:14:54.471357   13512 logs.go:123] Gathering logs for kube-apiserver [2d1fcac82c22] ...
	I0328 00:14:54.471895   13512 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d1fcac82c22"
	I0328 00:14:54.524447   13512 logs.go:123] Gathering logs for kube-apiserver [469d6ee62f5d] ...
	I0328 00:14:54.524447   13512 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 469d6ee62f5d"
	I0328 00:14:54.607843   13512 logs.go:123] Gathering logs for etcd [876120cb9271] ...
	I0328 00:14:54.607843   13512 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 876120cb9271"
	I0328 00:14:54.663538   13512 logs.go:123] Gathering logs for kindnet [6dcd6df77ad0] ...
	I0328 00:14:54.663538   13512 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6dcd6df77ad0"
	I0328 00:14:54.701283   13512 logs.go:123] Gathering logs for kubelet ...
	I0328 00:14:54.701283   13512 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0328 00:14:54.771387   13512 logs.go:138] Found kubelet problem: Mar 28 00:12:52 ha-170000-m03 kubelet[2040]: W0328 00:12:52.440946    2040 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
	W0328 00:14:54.771387   13512 logs.go:138] Found kubelet problem: Mar 28 00:12:52 ha-170000-m03 kubelet[2040]: E0328 00:12:52.441012    2040 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
	W0328 00:14:54.771387   13512 logs.go:138] Found kubelet problem: Mar 28 00:12:52 ha-170000-m03 kubelet[2040]: W0328 00:12:52.441103    2040 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: nodes "ha-170000-m03" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
	W0328 00:14:54.771387   13512 logs.go:138] Found kubelet problem: Mar 28 00:12:52 ha-170000-m03 kubelet[2040]: E0328 00:12:52.441122    2040 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: nodes "ha-170000-m03" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
	W0328 00:14:54.773384   13512 logs.go:138] Found kubelet problem: Mar 28 00:12:52 ha-170000-m03 kubelet[2040]: E0328 00:12:52.466071    2040 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{ha-170000-m03.17c0c5483cee70cc  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ha-170000-m03,UID:ha-170000-m03,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ha-170000-m03,},FirstTimestamp:2024-03-28 00:12:52.451365068 +0000 UTC m=+0.802131598,LastTimestamp:2024-03-28 00:12:52.451365068 +0000 UTC m=+0.802131598,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-170000-m03,}"
	W0328 00:14:54.773384   13512 logs.go:138] Found kubelet problem: Mar 28 00:12:52 ha-170000-m03 kubelet[2040]: W0328 00:12:52.467071    2040 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0328 00:14:54.773384   13512 logs.go:138] Found kubelet problem: Mar 28 00:12:52 ha-170000-m03 kubelet[2040]: E0328 00:12:52.467127    2040 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0328 00:14:54.794917   13512 logs.go:123] Gathering logs for kube-scheduler [7c734b945c80] ...
	I0328 00:14:54.795079   13512 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c734b945c80"
	I0328 00:14:54.858201   13512 logs.go:123] Gathering logs for kube-proxy [9c877ca8a645] ...
	I0328 00:14:54.858201   13512 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c877ca8a645"
	I0328 00:14:54.891191   13512 logs.go:123] Gathering logs for kube-controller-manager [1c949d54f393] ...
	I0328 00:14:54.891191   13512 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c949d54f393"
	I0328 00:14:54.947186   13512 logs.go:123] Gathering logs for kube-controller-manager [1d96dd72244b] ...
	I0328 00:14:54.947186   13512 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d96dd72244b"
	I0328 00:14:54.985352   13512 out.go:304] Setting ErrFile to fd 920...
	I0328 00:14:54.985352   13512 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0328 00:14:54.985352   13512 out.go:239] X Problems detected in kubelet:
	W0328 00:14:54.985352   13512 out.go:239]   Mar 28 00:12:52 ha-170000-m03 kubelet[2040]: W0328 00:12:52.441103    2040 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: nodes "ha-170000-m03" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
	W0328 00:14:54.985352   13512 out.go:239]   Mar 28 00:12:52 ha-170000-m03 kubelet[2040]: E0328 00:12:52.441122    2040 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: nodes "ha-170000-m03" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
	W0328 00:14:54.985352   13512 out.go:239]   Mar 28 00:12:52 ha-170000-m03 kubelet[2040]: E0328 00:12:52.466071    2040 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{ha-170000-m03.17c0c5483cee70cc  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ha-170000-m03,UID:ha-170000-m03,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ha-170000-m03,},FirstTimestamp:2024-03-28 00:12:52.451365068 +0000 UTC m=+0.802131598,LastTimestamp:2024-03-28 00:12:52.451365068 +0000 UTC m=+0.802131598,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-170000-m03,}"
	W0328 00:14:54.985352   13512 out.go:239]   Mar 28 00:12:52 ha-170000-m03 kubelet[2040]: W0328 00:12:52.467071    2040 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0328 00:14:54.985352   13512 out.go:239]   Mar 28 00:12:52 ha-170000-m03 kubelet[2040]: E0328 00:12:52.467127    2040 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0328 00:14:54.985352   13512 out.go:304] Setting ErrFile to fd 920...
	I0328 00:14:54.985352   13512 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 00:15:05.003275   13512 api_server.go:253] Checking apiserver healthz at https://172.28.239.31:8443/healthz ...
	I0328 00:15:05.012958   13512 api_server.go:279] https://172.28.239.31:8443/healthz returned 200:
	ok
	I0328 00:15:05.013209   13512 round_trippers.go:463] GET https://172.28.239.31:8443/version
	I0328 00:15:05.013209   13512 round_trippers.go:469] Request Headers:
	I0328 00:15:05.013209   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:15:05.013209   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:15:05.015807   13512 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0328 00:15:05.015889   13512 api_server.go:141] control plane version: v1.29.3
	I0328 00:15:05.015889   13512 api_server.go:131] duration metric: took 11.3945847s to wait for apiserver health ...
	I0328 00:15:05.015889   13512 system_pods.go:43] waiting for kube-system pods to appear ...
	I0328 00:15:05.027296   13512 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0328 00:15:05.055587   13512 logs.go:276] 2 containers: [2d1fcac82c22 469d6ee62f5d]
	I0328 00:15:05.065882   13512 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0328 00:15:05.091905   13512 logs.go:276] 1 containers: [876120cb9271]
	I0328 00:15:05.102535   13512 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0328 00:15:05.131226   13512 logs.go:276] 0 containers: []
	W0328 00:15:05.131313   13512 logs.go:278] No container was found matching "coredns"
	I0328 00:15:05.143263   13512 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0328 00:15:05.174284   13512 logs.go:276] 1 containers: [7c734b945c80]
	I0328 00:15:05.186070   13512 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0328 00:15:05.217359   13512 logs.go:276] 1 containers: [9c877ca8a645]
	I0328 00:15:05.228478   13512 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0328 00:15:05.264775   13512 logs.go:276] 2 containers: [1c949d54f393 1d96dd72244b]
	I0328 00:15:05.277680   13512 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0328 00:15:05.316910   13512 logs.go:276] 1 containers: [6dcd6df77ad0]
	I0328 00:15:05.316910   13512 logs.go:123] Gathering logs for kube-scheduler [7c734b945c80] ...
	I0328 00:15:05.316910   13512 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c734b945c80"
	I0328 00:15:05.378581   13512 logs.go:123] Gathering logs for kube-controller-manager [1c949d54f393] ...
	I0328 00:15:05.378581   13512 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c949d54f393"
	I0328 00:15:05.437209   13512 logs.go:123] Gathering logs for kube-controller-manager [1d96dd72244b] ...
	I0328 00:15:05.437209   13512 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d96dd72244b"
	I0328 00:15:05.470250   13512 logs.go:123] Gathering logs for Docker ...
	I0328 00:15:05.470314   13512 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0328 00:15:05.552045   13512 logs.go:123] Gathering logs for kubelet ...
	I0328 00:15:05.552045   13512 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0328 00:15:05.628942   13512 logs.go:138] Found kubelet problem: Mar 28 00:12:52 ha-170000-m03 kubelet[2040]: W0328 00:12:52.440946    2040 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
	W0328 00:15:05.628942   13512 logs.go:138] Found kubelet problem: Mar 28 00:12:52 ha-170000-m03 kubelet[2040]: E0328 00:12:52.441012    2040 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
	W0328 00:15:05.629817   13512 logs.go:138] Found kubelet problem: Mar 28 00:12:52 ha-170000-m03 kubelet[2040]: W0328 00:12:52.441103    2040 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: nodes "ha-170000-m03" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
	W0328 00:15:05.629817   13512 logs.go:138] Found kubelet problem: Mar 28 00:12:52 ha-170000-m03 kubelet[2040]: E0328 00:12:52.441122    2040 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: nodes "ha-170000-m03" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
	W0328 00:15:05.631019   13512 logs.go:138] Found kubelet problem: Mar 28 00:12:52 ha-170000-m03 kubelet[2040]: E0328 00:12:52.466071    2040 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{ha-170000-m03.17c0c5483cee70cc  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ha-170000-m03,UID:ha-170000-m03,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ha-170000-m03,},FirstTimestamp:2024-03-28 00:12:52.451365068 +0000 UTC m=+0.802131598,LastTimestamp:2024-03-28 00:12:52.451365068 +0000 UTC m=+0.802131598,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-170000-m03,}"
	W0328 00:15:05.632026   13512 logs.go:138] Found kubelet problem: Mar 28 00:12:52 ha-170000-m03 kubelet[2040]: W0328 00:12:52.467071    2040 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0328 00:15:05.632706   13512 logs.go:138] Found kubelet problem: Mar 28 00:12:52 ha-170000-m03 kubelet[2040]: E0328 00:12:52.467127    2040 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0328 00:15:05.652638   13512 logs.go:123] Gathering logs for dmesg ...
	I0328 00:15:05.652638   13512 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 00:15:05.683184   13512 logs.go:123] Gathering logs for describe nodes ...
	I0328 00:15:05.683184   13512 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0328 00:15:06.013740   13512 logs.go:123] Gathering logs for kube-apiserver [469d6ee62f5d] ...
	I0328 00:15:06.013740   13512 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 469d6ee62f5d"
	I0328 00:15:06.112578   13512 logs.go:123] Gathering logs for container status ...
	I0328 00:15:06.112578   13512 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 00:15:06.244234   13512 logs.go:123] Gathering logs for kube-apiserver [2d1fcac82c22] ...
	I0328 00:15:06.244302   13512 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d1fcac82c22"
	I0328 00:15:06.293482   13512 logs.go:123] Gathering logs for etcd [876120cb9271] ...
	I0328 00:15:06.293703   13512 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 876120cb9271"
	I0328 00:15:06.352995   13512 logs.go:123] Gathering logs for kube-proxy [9c877ca8a645] ...
	I0328 00:15:06.352995   13512 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c877ca8a645"
	I0328 00:15:06.388668   13512 logs.go:123] Gathering logs for kindnet [6dcd6df77ad0] ...
	I0328 00:15:06.388668   13512 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6dcd6df77ad0"
	I0328 00:15:06.427957   13512 out.go:304] Setting ErrFile to fd 920...
	I0328 00:15:06.427957   13512 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0328 00:15:06.427957   13512 out.go:239] X Problems detected in kubelet:
	W0328 00:15:06.427957   13512 out.go:239]   Mar 28 00:12:52 ha-170000-m03 kubelet[2040]: W0328 00:12:52.441103    2040 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: nodes "ha-170000-m03" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
	W0328 00:15:06.427957   13512 out.go:239]   Mar 28 00:12:52 ha-170000-m03 kubelet[2040]: E0328 00:12:52.441122    2040 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: nodes "ha-170000-m03" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
	W0328 00:15:06.427957   13512 out.go:239]   Mar 28 00:12:52 ha-170000-m03 kubelet[2040]: E0328 00:12:52.466071    2040 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{ha-170000-m03.17c0c5483cee70cc  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ha-170000-m03,UID:ha-170000-m03,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ha-170000-m03,},FirstTimestamp:2024-03-28 00:12:52.451365068 +0000 UTC m=+0.802131598,LastTimestamp:2024-03-28 00:12:52.451365068 +0000 UTC m=+0.802131598,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-170000-m03,}"
	W0328 00:15:06.427957   13512 out.go:239]   Mar 28 00:12:52 ha-170000-m03 kubelet[2040]: W0328 00:12:52.467071    2040 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0328 00:15:06.427957   13512 out.go:239]   Mar 28 00:12:52 ha-170000-m03 kubelet[2040]: E0328 00:12:52.467127    2040 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0328 00:15:06.427957   13512 out.go:304] Setting ErrFile to fd 920...
	I0328 00:15:06.427957   13512 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 00:15:16.452773   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods
	I0328 00:15:16.452870   13512 round_trippers.go:469] Request Headers:
	I0328 00:15:16.452870   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:15:16.452870   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:15:16.463599   13512 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0328 00:15:16.475789   13512 system_pods.go:59] 24 kube-system pods found
	I0328 00:15:16.475789   13512 system_pods.go:61] "coredns-76f75df574-5npq4" [b4a0463f-825d-4255-8704-6f41119d0930] Running
	I0328 00:15:16.475789   13512 system_pods.go:61] "coredns-76f75df574-mgrhj" [99d60631-1b51-4a6c-8819-5211bda5280d] Running
	I0328 00:15:16.475789   13512 system_pods.go:61] "etcd-ha-170000" [845298f4-b42f-4a38-888d-eda92aba2483] Running
	I0328 00:15:16.475789   13512 system_pods.go:61] "etcd-ha-170000-m02" [e37bcbf6-ea52-4df9-85e5-075621af992e] Running
	I0328 00:15:16.475789   13512 system_pods.go:61] "etcd-ha-170000-m03" [f6eb8cee-0103-4081-b8b1-9599dea6fca3] Running
	I0328 00:15:16.475789   13512 system_pods.go:61] "kindnet-bkl4c" [718fd32a-7015-4747-ae2d-cc39f0b83d0a] Running
	I0328 00:15:16.475789   13512 system_pods.go:61] "kindnet-n4x2r" [3b4b74d3-f82e-4337-a430-63ff92ca0efd] Running
	I0328 00:15:16.475789   13512 system_pods.go:61] "kindnet-xf7sr" [32758e2b-9a9f-4f89-9e6e-e1594abc2019] Running
	I0328 00:15:16.475789   13512 system_pods.go:61] "kube-apiserver-ha-170000" [0a3b4585-9f02-46b3-84cf-b4920d4dd1e3] Running
	I0328 00:15:16.475789   13512 system_pods.go:61] "kube-apiserver-ha-170000-m02" [3c02a8b5-5251-48fb-9865-bbdd879129bd] Running
	I0328 00:15:16.475789   13512 system_pods.go:61] "kube-apiserver-ha-170000-m03" [0df204d3-193e-454b-97eb-288138c2cdab] Running
	I0328 00:15:16.475789   13512 system_pods.go:61] "kube-controller-manager-ha-170000" [0062a6c2-2560-410f-b286-06409e50d26f] Running
	I0328 00:15:16.475789   13512 system_pods.go:61] "kube-controller-manager-ha-170000-m02" [4b136d09-f721-4103-b51b-ad58673ef4e2] Running
	I0328 00:15:16.475789   13512 system_pods.go:61] "kube-controller-manager-ha-170000-m03" [79799961-0360-4b14-9dc4-c58065b02fd8] Running
	I0328 00:15:16.475789   13512 system_pods.go:61] "kube-proxy-29dwg" [c2c9700a-d6b4-4c64-bc5e-7d434f2df188] Running
	I0328 00:15:16.475789   13512 system_pods.go:61] "kube-proxy-w2z74" [e88fc457-735e-4a67-89a1-223af2ea10d9] Running
	I0328 00:15:16.475789   13512 system_pods.go:61] "kube-proxy-wrvmg" [a049745a-2586-4e19-b8a9-ca96fead5905] Running
	I0328 00:15:16.475789   13512 system_pods.go:61] "kube-scheduler-ha-170000" [e11fffcf-8ff5-421d-9151-e00cd9a639a1] Running
	I0328 00:15:16.475789   13512 system_pods.go:61] "kube-scheduler-ha-170000-m02" [4bb54c59-156a-42a0-bca0-fb43cd4cbe27] Running
	I0328 00:15:16.476375   13512 system_pods.go:61] "kube-scheduler-ha-170000-m03" [7077722d-b2ca-4a1c-9b18-1a5bd8e541e2] Running
	I0328 00:15:16.476375   13512 system_pods.go:61] "kube-vip-ha-170000" [f958566a-56f8-436a-b5b4-8823c6cb2e2c] Running
	I0328 00:15:16.476375   13512 system_pods.go:61] "kube-vip-ha-170000-m02" [0380ec5c-628c-429c-8f5f-36260dc029f4] Running
	I0328 00:15:16.476375   13512 system_pods.go:61] "kube-vip-ha-170000-m03" [09d0c667-4fa3-47a5-b680-370e05a735f2] Running
	I0328 00:15:16.476375   13512 system_pods.go:61] "storage-provisioner" [5586fd50-77c3-4335-8c64-1120c6a32034] Running
	I0328 00:15:16.476375   13512 system_pods.go:74] duration metric: took 11.4604134s to wait for pod list to return data ...
	I0328 00:15:16.476375   13512 default_sa.go:34] waiting for default service account to be created ...
	I0328 00:15:16.476595   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/default/serviceaccounts
	I0328 00:15:16.476595   13512 round_trippers.go:469] Request Headers:
	I0328 00:15:16.476595   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:15:16.476595   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:15:16.488285   13512 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0328 00:15:16.488285   13512 default_sa.go:45] found service account: "default"
	I0328 00:15:16.488285   13512 default_sa.go:55] duration metric: took 11.91ms for default service account to be created ...
	I0328 00:15:16.488285   13512 system_pods.go:116] waiting for k8s-apps to be running ...
	I0328 00:15:16.488285   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/namespaces/kube-system/pods
	I0328 00:15:16.488285   13512 round_trippers.go:469] Request Headers:
	I0328 00:15:16.488285   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:15:16.488285   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:15:16.499844   13512 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0328 00:15:16.510720   13512 system_pods.go:86] 24 kube-system pods found
	I0328 00:15:16.510855   13512 system_pods.go:89] "coredns-76f75df574-5npq4" [b4a0463f-825d-4255-8704-6f41119d0930] Running
	I0328 00:15:16.510855   13512 system_pods.go:89] "coredns-76f75df574-mgrhj" [99d60631-1b51-4a6c-8819-5211bda5280d] Running
	I0328 00:15:16.510855   13512 system_pods.go:89] "etcd-ha-170000" [845298f4-b42f-4a38-888d-eda92aba2483] Running
	I0328 00:15:16.510855   13512 system_pods.go:89] "etcd-ha-170000-m02" [e37bcbf6-ea52-4df9-85e5-075621af992e] Running
	I0328 00:15:16.510855   13512 system_pods.go:89] "etcd-ha-170000-m03" [f6eb8cee-0103-4081-b8b1-9599dea6fca3] Running
	I0328 00:15:16.510855   13512 system_pods.go:89] "kindnet-bkl4c" [718fd32a-7015-4747-ae2d-cc39f0b83d0a] Running
	I0328 00:15:16.510855   13512 system_pods.go:89] "kindnet-n4x2r" [3b4b74d3-f82e-4337-a430-63ff92ca0efd] Running
	I0328 00:15:16.510855   13512 system_pods.go:89] "kindnet-xf7sr" [32758e2b-9a9f-4f89-9e6e-e1594abc2019] Running
	I0328 00:15:16.510855   13512 system_pods.go:89] "kube-apiserver-ha-170000" [0a3b4585-9f02-46b3-84cf-b4920d4dd1e3] Running
	I0328 00:15:16.510855   13512 system_pods.go:89] "kube-apiserver-ha-170000-m02" [3c02a8b5-5251-48fb-9865-bbdd879129bd] Running
	I0328 00:15:16.510855   13512 system_pods.go:89] "kube-apiserver-ha-170000-m03" [0df204d3-193e-454b-97eb-288138c2cdab] Running
	I0328 00:15:16.510855   13512 system_pods.go:89] "kube-controller-manager-ha-170000" [0062a6c2-2560-410f-b286-06409e50d26f] Running
	I0328 00:15:16.510855   13512 system_pods.go:89] "kube-controller-manager-ha-170000-m02" [4b136d09-f721-4103-b51b-ad58673ef4e2] Running
	I0328 00:15:16.510855   13512 system_pods.go:89] "kube-controller-manager-ha-170000-m03" [79799961-0360-4b14-9dc4-c58065b02fd8] Running
	I0328 00:15:16.510855   13512 system_pods.go:89] "kube-proxy-29dwg" [c2c9700a-d6b4-4c64-bc5e-7d434f2df188] Running
	I0328 00:15:16.510855   13512 system_pods.go:89] "kube-proxy-w2z74" [e88fc457-735e-4a67-89a1-223af2ea10d9] Running
	I0328 00:15:16.511009   13512 system_pods.go:89] "kube-proxy-wrvmg" [a049745a-2586-4e19-b8a9-ca96fead5905] Running
	I0328 00:15:16.511009   13512 system_pods.go:89] "kube-scheduler-ha-170000" [e11fffcf-8ff5-421d-9151-e00cd9a639a1] Running
	I0328 00:15:16.511009   13512 system_pods.go:89] "kube-scheduler-ha-170000-m02" [4bb54c59-156a-42a0-bca0-fb43cd4cbe27] Running
	I0328 00:15:16.511009   13512 system_pods.go:89] "kube-scheduler-ha-170000-m03" [7077722d-b2ca-4a1c-9b18-1a5bd8e541e2] Running
	I0328 00:15:16.511063   13512 system_pods.go:89] "kube-vip-ha-170000" [f958566a-56f8-436a-b5b4-8823c6cb2e2c] Running
	I0328 00:15:16.511063   13512 system_pods.go:89] "kube-vip-ha-170000-m02" [0380ec5c-628c-429c-8f5f-36260dc029f4] Running
	I0328 00:15:16.511063   13512 system_pods.go:89] "kube-vip-ha-170000-m03" [09d0c667-4fa3-47a5-b680-370e05a735f2] Running
	I0328 00:15:16.511089   13512 system_pods.go:89] "storage-provisioner" [5586fd50-77c3-4335-8c64-1120c6a32034] Running
	I0328 00:15:16.511089   13512 system_pods.go:126] duration metric: took 22.8038ms to wait for k8s-apps to be running ...
	I0328 00:15:16.511089   13512 system_svc.go:44] waiting for kubelet service to be running ....
	I0328 00:15:16.523843   13512 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0328 00:15:16.555763   13512 system_svc.go:56] duration metric: took 44.612ms WaitForService to wait for kubelet
	I0328 00:15:16.555797   13512 kubeadm.go:576] duration metric: took 2m8.6674606s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0328 00:15:16.555868   13512 node_conditions.go:102] verifying NodePressure condition ...
	I0328 00:15:16.555915   13512 round_trippers.go:463] GET https://172.28.239.31:8443/api/v1/nodes
	I0328 00:15:16.556041   13512 round_trippers.go:469] Request Headers:
	I0328 00:15:16.556041   13512 round_trippers.go:473]     Accept: application/json, */*
	I0328 00:15:16.556041   13512 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 00:15:16.561929   13512 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 00:15:16.562917   13512 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0328 00:15:16.562917   13512 node_conditions.go:123] node cpu capacity is 2
	I0328 00:15:16.562917   13512 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0328 00:15:16.562917   13512 node_conditions.go:123] node cpu capacity is 2
	I0328 00:15:16.562917   13512 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0328 00:15:16.562917   13512 node_conditions.go:123] node cpu capacity is 2
	I0328 00:15:16.562917   13512 node_conditions.go:105] duration metric: took 7.0489ms to run NodePressure ...
	I0328 00:15:16.562917   13512 start.go:240] waiting for startup goroutines ...
	I0328 00:15:16.562917   13512 start.go:254] writing updated cluster config ...
	I0328 00:15:16.579370   13512 ssh_runner.go:195] Run: rm -f paused
	I0328 00:15:16.790310   13512 start.go:600] kubectl: 1.29.3, cluster: 1.29.3 (minor skew: 0)
	I0328 00:15:16.792307   13512 out.go:177] * Done! kubectl is now configured to use "ha-170000" cluster and "default" namespace by default
	
	
	==> Docker <==
	Mar 28 00:05:10 ha-170000 dockerd[1340]: time="2024-03-28T00:05:10.632423224Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Mar 28 00:05:10 ha-170000 dockerd[1340]: time="2024-03-28T00:05:10.632859027Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 28 00:05:10 ha-170000 dockerd[1340]: time="2024-03-28T00:05:10.638488363Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 28 00:05:10 ha-170000 dockerd[1340]: time="2024-03-28T00:05:10.646889117Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Mar 28 00:05:10 ha-170000 dockerd[1340]: time="2024-03-28T00:05:10.648199226Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Mar 28 00:05:10 ha-170000 dockerd[1340]: time="2024-03-28T00:05:10.648306527Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 28 00:05:10 ha-170000 dockerd[1340]: time="2024-03-28T00:05:10.649054031Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 28 00:15:57 ha-170000 dockerd[1340]: time="2024-03-28T00:15:57.702788182Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Mar 28 00:15:57 ha-170000 dockerd[1340]: time="2024-03-28T00:15:57.702937683Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Mar 28 00:15:57 ha-170000 dockerd[1340]: time="2024-03-28T00:15:57.702953383Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 28 00:15:57 ha-170000 dockerd[1340]: time="2024-03-28T00:15:57.703089083Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 28 00:15:57 ha-170000 cri-dockerd[1222]: time="2024-03-28T00:15:57Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/9fe22e827be821309accf5ebe49a48347beae58ec00197836b05196adf11b6a0/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Mar 28 00:15:59 ha-170000 cri-dockerd[1222]: time="2024-03-28T00:15:59Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28"
	Mar 28 00:15:59 ha-170000 dockerd[1340]: time="2024-03-28T00:15:59.431862212Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Mar 28 00:15:59 ha-170000 dockerd[1340]: time="2024-03-28T00:15:59.433544920Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Mar 28 00:15:59 ha-170000 dockerd[1340]: time="2024-03-28T00:15:59.433817821Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 28 00:15:59 ha-170000 dockerd[1340]: time="2024-03-28T00:15:59.435603130Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 28 00:17:04 ha-170000 dockerd[1333]: 2024/03/28 00:17:04 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Mar 28 00:17:04 ha-170000 dockerd[1333]: 2024/03/28 00:17:04 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Mar 28 00:17:05 ha-170000 dockerd[1333]: 2024/03/28 00:17:05 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Mar 28 00:17:05 ha-170000 dockerd[1333]: 2024/03/28 00:17:05 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Mar 28 00:17:05 ha-170000 dockerd[1333]: 2024/03/28 00:17:05 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Mar 28 00:17:05 ha-170000 dockerd[1333]: 2024/03/28 00:17:05 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Mar 28 00:17:05 ha-170000 dockerd[1333]: 2024/03/28 00:17:05 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Mar 28 00:17:05 ha-170000 dockerd[1333]: 2024/03/28 00:17:05 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	b83fcd983b8f1       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   15 minutes ago      Running             busybox                   0                   9fe22e827be82       busybox-7fdf7869d9-jw6s4
	8246295778b70       cbb01a7bd410d                                                                                         26 minutes ago      Running             coredns                   0                   5097b6406500f       coredns-76f75df574-mgrhj
	d8fea38581c75       cbb01a7bd410d                                                                                         26 minutes ago      Running             coredns                   0                   c410fb61b51cf       coredns-76f75df574-5npq4
	c90ed8febdea8       6e38f40d628db                                                                                         26 minutes ago      Running             storage-provisioner       0                   16835a4276f7b       storage-provisioner
	bf50dc1255b37       kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988              26 minutes ago      Running             kindnet-cni               0                   a8adc945f2124       kindnet-n4x2r
	44afe7b75e4ac       a1d263b5dc5b0                                                                                         26 minutes ago      Running             kube-proxy                0                   ee1d628428649       kube-proxy-w2z74
	99405c5a19ad9       ghcr.io/kube-vip/kube-vip@sha256:82698885b3b5f926cd940b7000549f3d43850cb6565a708162900c1475a83016     27 minutes ago      Running             kube-vip                  0                   a790305a76458       kube-vip-ha-170000
	1ff184616e98c       6052a25da3f97                                                                                         27 minutes ago      Running             kube-controller-manager   0                   4ce90e8d8aa30       kube-controller-manager-ha-170000
	3d72f73e04bee       39f995c9f1996                                                                                         27 minutes ago      Running             kube-apiserver            0                   cc932594c4ded       kube-apiserver-ha-170000
	da083b3d9d734       8c390d98f50c0                                                                                         27 minutes ago      Running             kube-scheduler            0                   ad6e909ec407f       kube-scheduler-ha-170000
	b8c1ccb11ebd4       3861cfcd7c04c                                                                                         27 minutes ago      Running             etcd                      0                   58cd9afeced59       etcd-ha-170000
	
	
	==> coredns [8246295778b7] <==
	[INFO] 10.244.2.2:52967 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 140 0.0000752s
	[INFO] 10.244.2.2:37242 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,aa,rd,ra 140 0.0000575s
	[INFO] 10.244.1.2:35324 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,aa,rd,ra 140 0.000092101s
	[INFO] 10.244.0.4:59929 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000281301s
	[INFO] 10.244.0.4:57682 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000244101s
	[INFO] 10.244.2.2:44472 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000213601s
	[INFO] 10.244.2.2:48809 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000200001s
	[INFO] 10.244.2.2:44642 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000226002s
	[INFO] 10.244.2.2:54650 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0000936s
	[INFO] 10.244.1.2:50510 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000205601s
	[INFO] 10.244.1.2:40738 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.032875759s
	[INFO] 10.244.1.2:41252 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000062s
	[INFO] 10.244.0.4:57610 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000234001s
	[INFO] 10.244.0.4:57921 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000195801s
	[INFO] 10.244.2.2:38740 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000135701s
	[INFO] 10.244.2.2:45709 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000222601s
	[INFO] 10.244.2.2:59586 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0000705s
	[INFO] 10.244.1.2:47697 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001047s
	[INFO] 10.244.1.2:55138 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000248501s
	[INFO] 10.244.1.2:45737 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000137101s
	[INFO] 10.244.2.2:51738 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000294302s
	[INFO] 10.244.2.2:44699 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000130901s
	[INFO] 10.244.1.2:51466 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000156201s
	[INFO] 10.244.1.2:55077 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000207701s
	[INFO] 10.244.1.2:34241 - 5 "PTR IN 1.224.28.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.0001024s
	
	
	==> coredns [d8fea38581c7] <==
	[INFO] 10.244.0.4:51781 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.024278218s
	[INFO] 10.244.0.4:60752 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000135s
	[INFO] 10.244.0.4:46184 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.023788416s
	[INFO] 10.244.0.4:45507 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000223801s
	[INFO] 10.244.0.4:33072 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0001496s
	[INFO] 10.244.2.2:37301 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000186601s
	[INFO] 10.244.2.2:54878 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000263301s
	[INFO] 10.244.2.2:46781 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000061701s
	[INFO] 10.244.2.2:41724 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0001956s
	[INFO] 10.244.1.2:34059 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000162301s
	[INFO] 10.244.1.2:46112 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000108s
	[INFO] 10.244.1.2:39207 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000285202s
	[INFO] 10.244.1.2:47256 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000066s
	[INFO] 10.244.1.2:47050 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000057901s
	[INFO] 10.244.0.4:53037 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000077101s
	[INFO] 10.244.0.4:53530 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0001593s
	[INFO] 10.244.2.2:52086 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000137301s
	[INFO] 10.244.1.2:44769 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000189401s
	[INFO] 10.244.0.4:39493 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001896s
	[INFO] 10.244.0.4:37692 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000128s
	[INFO] 10.244.0.4:49225 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000148201s
	[INFO] 10.244.0.4:59721 - 5 "PTR IN 1.224.28.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000106301s
	[INFO] 10.244.2.2:57268 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000088s
	[INFO] 10.244.2.2:54394 - 5 "PTR IN 1.224.28.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000135701s
	[INFO] 10.244.1.2:58771 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000105s
	
	
	==> describe nodes <==
	Name:               ha-170000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-170000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1a940f980f77c9bfc8ef93678db8c1f49c9dd79d
	                    minikube.k8s.io/name=ha-170000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_28T00_04_44_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 28 Mar 2024 00:04:41 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-170000
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 28 Mar 2024 00:31:37 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 28 Mar 2024 00:31:41 +0000   Thu, 28 Mar 2024 00:04:38 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 28 Mar 2024 00:31:41 +0000   Thu, 28 Mar 2024 00:04:38 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 28 Mar 2024 00:31:41 +0000   Thu, 28 Mar 2024 00:04:38 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 28 Mar 2024 00:31:41 +0000   Thu, 28 Mar 2024 00:05:09 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.28.239.31
	  Hostname:    ha-170000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 a770d0428be346a1a9c5e89c2b0227a7
	  System UUID:                9452b03b-f477-1b41-a3a5-ba63fc271926
	  Boot ID:                    286ae28b-54a4-4ee2-9e74-d085b0ae89c4
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.0
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7fdf7869d9-jw6s4             0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 coredns-76f75df574-5npq4             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     26m
	  kube-system                 coredns-76f75df574-mgrhj             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     26m
	  kube-system                 etcd-ha-170000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         27m
	  kube-system                 kindnet-n4x2r                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      26m
	  kube-system                 kube-apiserver-ha-170000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 kube-controller-manager-ha-170000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 kube-proxy-w2z74                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         26m
	  kube-system                 kube-scheduler-ha-170000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 kube-vip-ha-170000                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         26m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 26m                kube-proxy       
	  Normal  NodeHasSufficientPID     27m (x7 over 27m)  kubelet          Node ha-170000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  27m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 27m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  27m (x8 over 27m)  kubelet          Node ha-170000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    27m (x8 over 27m)  kubelet          Node ha-170000 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 27m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  27m                kubelet          Node ha-170000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    27m                kubelet          Node ha-170000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     27m                kubelet          Node ha-170000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  27m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           26m                node-controller  Node ha-170000 event: Registered Node ha-170000 in Controller
	  Normal  NodeReady                26m                kubelet          Node ha-170000 status is now: NodeReady
	  Normal  RegisteredNode           22m                node-controller  Node ha-170000 event: Registered Node ha-170000 in Controller
	  Normal  RegisteredNode           17m                node-controller  Node ha-170000 event: Registered Node ha-170000 in Controller
	
	
	Name:               ha-170000-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-170000-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1a940f980f77c9bfc8ef93678db8c1f49c9dd79d
	                    minikube.k8s.io/name=ha-170000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_03_28T00_08_56_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 28 Mar 2024 00:08:48 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-170000-m02
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 28 Mar 2024 00:31:46 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 28 Mar 2024 00:26:41 +0000   Thu, 28 Mar 2024 00:08:48 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 28 Mar 2024 00:26:41 +0000   Thu, 28 Mar 2024 00:08:48 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 28 Mar 2024 00:26:41 +0000   Thu, 28 Mar 2024 00:08:48 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 28 Mar 2024 00:26:41 +0000   Thu, 28 Mar 2024 00:09:07 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.28.224.3
	  Hostname:    ha-170000-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 f60ab19a10b942a88b67b15a72ab77d0
	  System UUID:                33d5c3c7-5f0d-1f4a-93fb-c3dc18b4a10f
	  Boot ID:                    9c1d6c65-61e5-4a10-9316-c218e1e8157f
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.0
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7fdf7869d9-shnp5                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 etcd-ha-170000-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         22m
	  kube-system                 kindnet-xf7sr                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      22m
	  kube-system                 kube-apiserver-ha-170000-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-controller-manager-ha-170000-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-proxy-wrvmg                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-scheduler-ha-170000-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-vip-ha-170000-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 22m                kube-proxy       
	  Normal  Starting                 22m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  22m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  22m (x2 over 22m)  kubelet          Node ha-170000-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    22m (x2 over 22m)  kubelet          Node ha-170000-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     22m (x2 over 22m)  kubelet          Node ha-170000-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           22m                node-controller  Node ha-170000-m02 event: Registered Node ha-170000-m02 in Controller
	  Normal  NodeReady                22m                kubelet          Node ha-170000-m02 status is now: NodeReady
	  Normal  RegisteredNode           22m                node-controller  Node ha-170000-m02 event: Registered Node ha-170000-m02 in Controller
	  Normal  RegisteredNode           17m                node-controller  Node ha-170000-m02 event: Registered Node ha-170000-m02 in Controller
	
	
	Name:               ha-170000-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-170000-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1a940f980f77c9bfc8ef93678db8c1f49c9dd79d
	                    minikube.k8s.io/name=ha-170000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_03_28T00_13_07_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 28 Mar 2024 00:12:52 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-170000-m03
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 28 Mar 2024 00:31:36 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 28 Mar 2024 00:26:40 +0000   Thu, 28 Mar 2024 00:12:52 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 28 Mar 2024 00:26:40 +0000   Thu, 28 Mar 2024 00:12:52 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 28 Mar 2024 00:26:40 +0000   Thu, 28 Mar 2024 00:12:52 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 28 Mar 2024 00:26:40 +0000   Thu, 28 Mar 2024 00:13:11 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.28.227.17
	  Hostname:    ha-170000-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 4e7b215a7a6d4988b3521d84ebec4ac2
	  System UUID:                1ce6e39f-d5cc-944a-9944-0641d98a8c34
	  Boot ID:                    7461441a-4c10-4f12-8c56-536a4b743d7e
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.0
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7fdf7869d9-lb47v                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 etcd-ha-170000-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         18m
	  kube-system                 kindnet-bkl4c                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      18m
	  kube-system                 kube-apiserver-ha-170000-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-controller-manager-ha-170000-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-proxy-29dwg                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-scheduler-ha-170000-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-vip-ha-170000-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 18m                kube-proxy       
	  Normal  NodeHasSufficientMemory  18m (x8 over 18m)  kubelet          Node ha-170000-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    18m (x8 over 18m)  kubelet          Node ha-170000-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     18m (x7 over 18m)  kubelet          Node ha-170000-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  18m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           18m                node-controller  Node ha-170000-m03 event: Registered Node ha-170000-m03 in Controller
	  Normal  RegisteredNode           18m                node-controller  Node ha-170000-m03 event: Registered Node ha-170000-m03 in Controller
	  Normal  RegisteredNode           17m                node-controller  Node ha-170000-m03 event: Registered Node ha-170000-m03 in Controller
	
	
	Name:               ha-170000-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-170000-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1a940f980f77c9bfc8ef93678db8c1f49c9dd79d
	                    minikube.k8s.io/name=ha-170000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_03_28T00_20_30_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 28 Mar 2024 00:20:29 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-170000-m04
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 28 Mar 2024 00:31:44 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 28 Mar 2024 00:31:12 +0000   Thu, 28 Mar 2024 00:20:29 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 28 Mar 2024 00:31:12 +0000   Thu, 28 Mar 2024 00:20:29 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 28 Mar 2024 00:31:12 +0000   Thu, 28 Mar 2024 00:20:29 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 28 Mar 2024 00:31:12 +0000   Thu, 28 Mar 2024 00:20:52 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.28.237.96
	  Hostname:    ha-170000-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 2e89ab14fb36433390d6bc41ed7e7ca1
	  System UUID:                28f0b8c7-0914-ea47-bae5-7474edf23518
	  Boot ID:                    bd2e627c-f206-4c13-91ee-bc1a60b9a226
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.0
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-xxmj6       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      11m
	  kube-system                 kube-proxy-gtf89    0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 11m                kube-proxy       
	  Normal  NodeHasSufficientMemory  11m (x2 over 11m)  kubelet          Node ha-170000-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    11m (x2 over 11m)  kubelet          Node ha-170000-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     11m (x2 over 11m)  kubelet          Node ha-170000-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  11m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           11m                node-controller  Node ha-170000-m04 event: Registered Node ha-170000-m04 in Controller
	  Normal  RegisteredNode           11m                node-controller  Node ha-170000-m04 event: Registered Node ha-170000-m04 in Controller
	  Normal  RegisteredNode           11m                node-controller  Node ha-170000-m04 event: Registered Node ha-170000-m04 in Controller
	  Normal  NodeReady                10m                kubelet          Node ha-170000-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Mar28 00:03] systemd-fstab-generator[635]: Ignoring "noauto" option for root device
	[  +0.205053] systemd-fstab-generator[647]: Ignoring "noauto" option for root device
	[Mar28 00:04] systemd-fstab-generator[937]: Ignoring "noauto" option for root device
	[  +0.113717] kauditd_printk_skb: 59 callbacks suppressed
	[  +0.613584] systemd-fstab-generator[977]: Ignoring "noauto" option for root device
	[  +0.245296] systemd-fstab-generator[989]: Ignoring "noauto" option for root device
	[  +0.257352] systemd-fstab-generator[1003]: Ignoring "noauto" option for root device
	[  +2.872152] systemd-fstab-generator[1174]: Ignoring "noauto" option for root device
	[  +0.237910] systemd-fstab-generator[1186]: Ignoring "noauto" option for root device
	[  +0.212945] systemd-fstab-generator[1199]: Ignoring "noauto" option for root device
	[  +0.311443] systemd-fstab-generator[1214]: Ignoring "noauto" option for root device
	[ +12.078812] systemd-fstab-generator[1325]: Ignoring "noauto" option for root device
	[  +0.114553] kauditd_printk_skb: 205 callbacks suppressed
	[  +3.394309] systemd-fstab-generator[1527]: Ignoring "noauto" option for root device
	[  +7.907050] systemd-fstab-generator[1801]: Ignoring "noauto" option for root device
	[  +0.116312] kauditd_printk_skb: 73 callbacks suppressed
	[  +6.606910] kauditd_printk_skb: 67 callbacks suppressed
	[  +4.398530] systemd-fstab-generator[2745]: Ignoring "noauto" option for root device
	[ +15.094612] kauditd_printk_skb: 17 callbacks suppressed
	[Mar28 00:05] kauditd_printk_skb: 29 callbacks suppressed
	[  +5.201287] kauditd_printk_skb: 14 callbacks suppressed
	[Mar28 00:07] hrtimer: interrupt took 5469614 ns
	[Mar28 00:08] kauditd_printk_skb: 11 callbacks suppressed
	
	
	==> etcd [b8c1ccb11ebd] <==
	{"level":"info","ts":"2024-03-28T00:20:40.088911Z","caller":"traceutil/trace.go:171","msg":"trace[1783386712] transaction","detail":"{read_only:false; response_revision:2968; number_of_response:1; }","duration":"349.671131ms","start":"2024-03-28T00:20:39.739227Z","end":"2024-03-28T00:20:40.088898Z","steps":["trace[1783386712] 'process raft request'  (duration: 159.228289ms)","trace[1783386712] 'compare'  (duration: 189.644038ms)"],"step_count":2}
	{"level":"warn","ts":"2024-03-28T00:20:40.08899Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-28T00:20:39.739207Z","time spent":"349.751132ms","remote":"127.0.0.1:37788","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":541,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/leases/kube-node-lease/ha-170000-m04\" mod_revision:2891 > success:<request_put:<key:\"/registry/leases/kube-node-lease/ha-170000-m04\" value_size:487 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/ha-170000-m04\" > >"}
	{"level":"warn","ts":"2024-03-28T00:20:40.090252Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"143.637011ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/ha-170000-m04\" ","response":"range_response_count:1 size:3120"}
	{"level":"info","ts":"2024-03-28T00:20:40.090304Z","caller":"traceutil/trace.go:171","msg":"trace[640777162] range","detail":"{range_begin:/registry/minions/ha-170000-m04; range_end:; response_count:1; response_revision:2968; }","duration":"143.717811ms","start":"2024-03-28T00:20:39.946577Z","end":"2024-03-28T00:20:40.090295Z","steps":["trace[640777162] 'agreement among raft nodes before linearized reading'  (duration: 142.632306ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-28T00:20:40.448469Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"193.580158ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1110"}
	{"level":"info","ts":"2024-03-28T00:20:40.449571Z","caller":"traceutil/trace.go:171","msg":"trace[140100542] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:2968; }","duration":"194.957365ms","start":"2024-03-28T00:20:40.254585Z","end":"2024-03-28T00:20:40.449542Z","steps":["trace[140100542] 'range keys from in-memory index tree'  (duration: 191.95645ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-28T00:20:40.450891Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"194.09886ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/csistoragecapacities/\" range_end:\"/registry/csistoragecapacities0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-03-28T00:20:40.451637Z","caller":"traceutil/trace.go:171","msg":"trace[1322247477] range","detail":"{range_begin:/registry/csistoragecapacities/; range_end:/registry/csistoragecapacities0; response_count:0; response_revision:2968; }","duration":"194.861065ms","start":"2024-03-28T00:20:40.256757Z","end":"2024-03-28T00:20:40.451618Z","steps":["trace[1322247477] 'count revisions from in-memory index tree'  (duration: 192.666753ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-28T00:20:40.451369Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"122.427706ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/plndr-cp-lock\" ","response":"range_response_count:1 size:435"}
	{"level":"info","ts":"2024-03-28T00:20:40.451845Z","caller":"traceutil/trace.go:171","msg":"trace[670209687] range","detail":"{range_begin:/registry/leases/kube-system/plndr-cp-lock; range_end:; response_count:1; response_revision:2968; }","duration":"122.936008ms","start":"2024-03-28T00:20:40.328901Z","end":"2024-03-28T00:20:40.451837Z","steps":["trace[670209687] 'range keys from in-memory index tree'  (duration: 121.1901ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-28T00:20:40.603012Z","caller":"traceutil/trace.go:171","msg":"trace[1248340551] transaction","detail":"{read_only:false; response_revision:2969; number_of_response:1; }","duration":"145.533921ms","start":"2024-03-28T00:20:40.457447Z","end":"2024-03-28T00:20:40.602981Z","steps":["trace[1248340551] 'process raft request'  (duration: 145.101918ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-28T00:20:40.69791Z","caller":"traceutil/trace.go:171","msg":"trace[926418852] transaction","detail":"{read_only:false; response_revision:2970; number_of_response:1; }","duration":"234.067758ms","start":"2024-03-28T00:20:40.463817Z","end":"2024-03-28T00:20:40.697885Z","steps":["trace[926418852] 'process raft request'  (duration: 188.045131ms)","trace[926418852] 'compare'  (duration: 45.579625ms)"],"step_count":2}
	{"level":"warn","ts":"2024-03-28T00:20:46.001296Z","caller":"etcdserver/raft.go:416","msg":"leader failed to send out heartbeat on time; took too long, leader is overloaded likely from slow disk","to":"321cb4736f05787e","heartbeat-interval":"100ms","expected-duration":"200ms","exceeded-duration":"9.97204ms"}
	{"level":"warn","ts":"2024-03-28T00:20:46.001408Z","caller":"etcdserver/raft.go:416","msg":"leader failed to send out heartbeat on time; took too long, leader is overloaded likely from slow disk","to":"64c53a594821e4c","heartbeat-interval":"100ms","expected-duration":"200ms","exceeded-duration":"10.08834ms"}
	{"level":"info","ts":"2024-03-28T00:20:46.19696Z","caller":"traceutil/trace.go:171","msg":"trace[1721508714] transaction","detail":"{read_only:false; response_revision:2989; number_of_response:1; }","duration":"391.020937ms","start":"2024-03-28T00:20:45.805921Z","end":"2024-03-28T00:20:46.196942Z","steps":["trace[1721508714] 'process raft request'  (duration: 390.899236ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-28T00:20:46.197834Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-28T00:20:45.805905Z","time spent":"391.262738ms","remote":"127.0.0.1:37788","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":420,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/plndr-cp-lock\" mod_revision:2987 > success:<request_put:<key:\"/registry/leases/kube-system/plndr-cp-lock\" value_size:370 >> failure:<request_range:<key:\"/registry/leases/kube-system/plndr-cp-lock\" > >"}
	{"level":"info","ts":"2024-03-28T00:20:46.199035Z","caller":"traceutil/trace.go:171","msg":"trace[813222456] linearizableReadLoop","detail":"{readStateIndex:3546; appliedIndex:3547; }","duration":"247.909128ms","start":"2024-03-28T00:20:45.951115Z","end":"2024-03-28T00:20:46.199024Z","steps":["trace[813222456] 'read index received'  (duration: 247.906028ms)","trace[813222456] 'applied index is now lower than readState.Index'  (duration: 2.5µs)"],"step_count":2}
	{"level":"warn","ts":"2024-03-28T00:20:46.199176Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"248.054429ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/ha-170000-m04\" ","response":"range_response_count:1 size:3120"}
	{"level":"info","ts":"2024-03-28T00:20:46.199207Z","caller":"traceutil/trace.go:171","msg":"trace[739969534] range","detail":"{range_begin:/registry/minions/ha-170000-m04; range_end:; response_count:1; response_revision:2989; }","duration":"248.113829ms","start":"2024-03-28T00:20:45.951083Z","end":"2024-03-28T00:20:46.199197Z","steps":["trace[739969534] 'agreement among raft nodes before linearized reading'  (duration: 248.009629ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-28T00:24:36.470298Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":2721}
	{"level":"info","ts":"2024-03-28T00:24:36.517072Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":2721,"took":"45.667629ms","hash":2045741560,"current-db-size-bytes":3485696,"current-db-size":"3.5 MB","current-db-size-in-use-bytes":2232320,"current-db-size-in-use":"2.2 MB"}
	{"level":"info","ts":"2024-03-28T00:24:36.517518Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2045741560,"revision":2721,"compact-revision":1886}
	{"level":"info","ts":"2024-03-28T00:29:36.502191Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":3566}
	{"level":"info","ts":"2024-03-28T00:29:36.556678Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":3566,"took":"53.316369ms","hash":4237502608,"current-db-size-bytes":3485696,"current-db-size":"3.5 MB","current-db-size-in-use-bytes":2080768,"current-db-size-in-use":"2.1 MB"}
	{"level":"info","ts":"2024-03-28T00:29:36.556864Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":4237502608,"revision":3566,"compact-revision":2721}
	
	
	==> kernel <==
	 00:31:46 up 29 min,  0 users,  load average: 0.38, 0.45, 0.44
	Linux ha-170000 5.10.207 #1 SMP Wed Mar 27 22:02:20 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [bf50dc1255b3] <==
	I0328 00:31:09.236256       1 main.go:250] Node ha-170000-m04 has CIDR [10.244.3.0/24] 
	I0328 00:31:19.248080       1 main.go:223] Handling node with IPs: map[172.28.239.31:{}]
	I0328 00:31:19.248158       1 main.go:227] handling current node
	I0328 00:31:19.248171       1 main.go:223] Handling node with IPs: map[172.28.224.3:{}]
	I0328 00:31:19.248178       1 main.go:250] Node ha-170000-m02 has CIDR [10.244.1.0/24] 
	I0328 00:31:19.248305       1 main.go:223] Handling node with IPs: map[172.28.227.17:{}]
	I0328 00:31:19.248317       1 main.go:250] Node ha-170000-m03 has CIDR [10.244.2.0/24] 
	I0328 00:31:19.248372       1 main.go:223] Handling node with IPs: map[172.28.237.96:{}]
	I0328 00:31:19.248378       1 main.go:250] Node ha-170000-m04 has CIDR [10.244.3.0/24] 
	I0328 00:31:29.257144       1 main.go:223] Handling node with IPs: map[172.28.239.31:{}]
	I0328 00:31:29.257268       1 main.go:227] handling current node
	I0328 00:31:29.257287       1 main.go:223] Handling node with IPs: map[172.28.224.3:{}]
	I0328 00:31:29.257296       1 main.go:250] Node ha-170000-m02 has CIDR [10.244.1.0/24] 
	I0328 00:31:29.257944       1 main.go:223] Handling node with IPs: map[172.28.227.17:{}]
	I0328 00:31:29.258101       1 main.go:250] Node ha-170000-m03 has CIDR [10.244.2.0/24] 
	I0328 00:31:29.258390       1 main.go:223] Handling node with IPs: map[172.28.237.96:{}]
	I0328 00:31:29.258428       1 main.go:250] Node ha-170000-m04 has CIDR [10.244.3.0/24] 
	I0328 00:31:39.269231       1 main.go:223] Handling node with IPs: map[172.28.239.31:{}]
	I0328 00:31:39.269441       1 main.go:227] handling current node
	I0328 00:31:39.269457       1 main.go:223] Handling node with IPs: map[172.28.224.3:{}]
	I0328 00:31:39.269465       1 main.go:250] Node ha-170000-m02 has CIDR [10.244.1.0/24] 
	I0328 00:31:39.269612       1 main.go:223] Handling node with IPs: map[172.28.227.17:{}]
	I0328 00:31:39.269626       1 main.go:250] Node ha-170000-m03 has CIDR [10.244.2.0/24] 
	I0328 00:31:39.269689       1 main.go:223] Handling node with IPs: map[172.28.237.96:{}]
	I0328 00:31:39.269696       1 main.go:250] Node ha-170000-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [3d72f73e04be] <==
	Trace[877059726]: ---"About to write a response" 648ms (00:12:43.598)
	Trace[877059726]: [648.35245ms] [648.35245ms] END
	E0328 00:12:53.392443       1 writers.go:122] apiserver was unable to write a JSON response: http: Handler timeout
	E0328 00:12:53.392586       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout
	E0328 00:12:53.392880       1 finisher.go:175] FinishRequest: post-timeout activity - time-elapsed: 9.2µs, panicked: false, err: context canceled, panic-reason: <nil>
	E0328 00:12:53.394480       1 writers.go:135] apiserver was unable to write a fallback JSON response: http: Handler timeout
	E0328 00:12:53.395166       1 timeout.go:142] post-timeout activity - time-elapsed: 2.836719ms, PATCH "/api/v1/namespaces/default/events/ha-170000-m03.17c0c548415d66c5" result: <nil>
	E0328 00:16:09.562667       1 upgradeaware.go:425] Error proxying data from client to backend: write tcp 172.28.239.31:38782->172.28.239.31:10250: write: broken pipe
	I0328 00:20:22.269135       1 trace.go:236] Trace[292829345]: "Update" accept:application/json, */*,audit-id:d8b55040-fb3c-4058-809d-a1869a5c50cf,client:127.0.0.1,api-group:coordination.k8s.io,api-version:v1,name:plndr-cp-lock,subresource:,namespace:kube-system,protocol:HTTP/2.0,resource:leases,scope:resource,url:/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/plndr-cp-lock,user-agent:kube-vip/v0.0.0 (linux/amd64) kubernetes/$Format,verb:PUT (28-Mar-2024 00:20:21.764) (total time: 504ms):
	Trace[292829345]: ["GuaranteedUpdate etcd3" audit-id:d8b55040-fb3c-4058-809d-a1869a5c50cf,key:/leases/kube-system/plndr-cp-lock,type:*coordination.Lease,resource:leases.coordination.k8s.io 504ms (00:20:21.764)
	Trace[292829345]:  ---"Txn call completed" 503ms (00:20:22.268)]
	Trace[292829345]: [504.482895ms] [504.482895ms] END
	I0328 00:20:22.822088       1 trace.go:236] Trace[200543052]: "GuaranteedUpdate etcd3" audit-id:,key:/masterleases/172.28.239.31,type:*v1.Endpoints,resource:apiServerIPInfo (28-Mar-2024 00:20:22.012) (total time: 809ms):
	Trace[200543052]: ---"initial value restored" 636ms (00:20:22.648)
	Trace[200543052]: ---"Transaction prepared" 163ms (00:20:22.812)
	Trace[200543052]: [809.725705ms] [809.725705ms] END
	http2: server: error reading preface from client 172.28.237.96:46818: read tcp 172.28.239.254:8443->172.28.237.96:46818: read: connection reset by peer
	I0328 00:20:34.202829       1 trace.go:236] Trace[1774401541]: "Update" accept:application/json, */*,audit-id:04494202-2ddc-4e48-8f24-b3afa0de1e7b,client:127.0.0.1,api-group:coordination.k8s.io,api-version:v1,name:plndr-cp-lock,subresource:,namespace:kube-system,protocol:HTTP/2.0,resource:leases,scope:resource,url:/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/plndr-cp-lock,user-agent:kube-vip/v0.0.0 (linux/amd64) kubernetes/$Format,verb:PUT (28-Mar-2024 00:20:33.651) (total time: 551ms):
	Trace[1774401541]: ["GuaranteedUpdate etcd3" audit-id:04494202-2ddc-4e48-8f24-b3afa0de1e7b,key:/leases/kube-system/plndr-cp-lock,type:*coordination.Lease,resource:leases.coordination.k8s.io 551ms (00:20:33.651)
	Trace[1774401541]:  ---"Txn call completed" 550ms (00:20:34.202)]
	Trace[1774401541]: [551.454729ms] [551.454729ms] END
	I0328 00:20:34.203353       1 trace.go:236] Trace[1355151674]: "Update" accept:application/json, */*,audit-id:5a48e4bb-4877-47d7-9e44-acaf22452e9d,client:172.28.239.31,api-group:,api-version:v1,name:k8s.io-minikube-hostpath,subresource:,namespace:kube-system,protocol:HTTP/2.0,resource:endpoints,scope:resource,url:/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath,user-agent:storage-provisioner/v0.0.0 (linux/amd64) kubernetes/$Format,verb:PUT (28-Mar-2024 00:20:33.653) (total time: 549ms):
	Trace[1355151674]: ["GuaranteedUpdate etcd3" audit-id:5a48e4bb-4877-47d7-9e44-acaf22452e9d,key:/services/endpoints/kube-system/k8s.io-minikube-hostpath,type:*core.Endpoints,resource:endpoints 549ms (00:20:33.653)
	Trace[1355151674]:  ---"Txn call completed" 548ms (00:20:34.202)]
	Trace[1355151674]: [549.937022ms] [549.937022ms] END
	
	
	==> kube-controller-manager [1ff184616e98] <==
	I0328 00:15:57.308193       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="99.091325ms"
	I0328 00:15:57.310081       1 event.go:376] "Event occurred" object="default/busybox-7fdf7869d9" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: busybox-7fdf7869d9-6gfqj"
	I0328 00:15:57.341634       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="33.221276ms"
	I0328 00:15:57.342073       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="71.8µs"
	I0328 00:15:57.533626       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="71.72608ms"
	I0328 00:15:57.533808       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="125.501µs"
	I0328 00:15:57.997591       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="192.001µs"
	I0328 00:15:59.737788       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="22.225907ms"
	I0328 00:15:59.739007       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="47.5µs"
	I0328 00:15:59.943559       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="22.59271ms"
	I0328 00:15:59.943654       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="34.1µs"
	I0328 00:16:00.639771       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="31.05855ms"
	I0328 00:16:00.641028       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="46.6µs"
	E0328 00:20:28.979915       1 certificate_controller.go:146] Sync csr-grj45 failed with : error updating signature for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io "csr-grj45": the object has been modified; please apply your changes to the latest version and try again
	I0328 00:20:29.080542       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-170000-m04\" does not exist"
	I0328 00:20:29.134977       1 range_allocator.go:380] "Set node PodCIDR" node="ha-170000-m04" podCIDRs=["10.244.3.0/24"]
	I0328 00:20:29.185891       1 event.go:376] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-gtf89"
	I0328 00:20:29.187867       1 event.go:376] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-xxmj6"
	I0328 00:20:29.293647       1 event.go:376] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: kindnet-s9ql4"
	I0328 00:20:29.293793       1 event.go:376] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: kube-proxy-fd9gp"
	I0328 00:20:29.507477       1 event.go:376] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: kube-proxy-qs6p4"
	I0328 00:20:29.550751       1 event.go:376] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: kindnet-z6znp"
	I0328 00:20:31.901467       1 event.go:376] "Event occurred" object="ha-170000-m04" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node ha-170000-m04 event: Registered Node ha-170000-m04 in Controller"
	I0328 00:20:31.934639       1 node_lifecycle_controller.go:874] "Missing timestamp for Node. Assuming now as a timestamp" node="ha-170000-m04"
	I0328 00:20:52.056425       1 topologycache.go:237] "Can't get CPU or zone information for node" node="ha-170000-m04"
	
	
	==> kube-proxy [44afe7b75e4a] <==
	I0328 00:04:58.973139       1 server_others.go:72] "Using iptables proxy"
	I0328 00:04:58.988819       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["172.28.239.31"]
	I0328 00:04:59.088028       1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
	I0328 00:04:59.088060       1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0328 00:04:59.088078       1 server_others.go:168] "Using iptables Proxier"
	I0328 00:04:59.093647       1 proxier.go:245] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0328 00:04:59.098135       1 server.go:865] "Version info" version="v1.29.3"
	I0328 00:04:59.098325       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0328 00:04:59.100225       1 config.go:188] "Starting service config controller"
	I0328 00:04:59.100347       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0328 00:04:59.100734       1 config.go:97] "Starting endpoint slice config controller"
	I0328 00:04:59.100997       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0328 00:04:59.102062       1 config.go:315] "Starting node config controller"
	I0328 00:04:59.102249       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0328 00:04:59.200882       1 shared_informer.go:318] Caches are synced for service config
	I0328 00:04:59.202008       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0328 00:04:59.202652       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [da083b3d9d73] <==
	W0328 00:04:40.729652       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0328 00:04:40.729808       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0328 00:04:40.732295       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0328 00:04:40.732696       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0328 00:04:40.777283       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0328 00:04:40.777485       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0328 00:04:40.865405       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0328 00:04:40.865632       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0328 00:04:42.494142       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0328 00:15:56.715070       1 framework.go:1244] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7fdf7869d9-shnp5\": pod busybox-7fdf7869d9-shnp5 is already assigned to node \"ha-170000-m02\"" plugin="DefaultBinder" pod="default/busybox-7fdf7869d9-shnp5" node="ha-170000-m02"
	E0328 00:15:56.716179       1 schedule_one.go:336] "scheduler cache ForgetPod failed" err="pod eea845c5-e86a-4f91-aa4c-190c2119b444(default/busybox-7fdf7869d9-shnp5) wasn't assumed so cannot be forgotten"
	E0328 00:15:56.716468       1 schedule_one.go:1003] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7fdf7869d9-shnp5\": pod busybox-7fdf7869d9-shnp5 is already assigned to node \"ha-170000-m02\"" pod="default/busybox-7fdf7869d9-shnp5"
	I0328 00:15:56.716618       1 schedule_one.go:1016] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7fdf7869d9-shnp5" node="ha-170000-m02"
	E0328 00:15:56.758748       1 framework.go:1244] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7fdf7869d9-lb47v\": pod busybox-7fdf7869d9-lb47v is already assigned to node \"ha-170000-m03\"" plugin="DefaultBinder" pod="default/busybox-7fdf7869d9-lb47v" node="ha-170000-m03"
	E0328 00:15:56.759336       1 schedule_one.go:336] "scheduler cache ForgetPod failed" err="pod 930d4502-cdff-45dc-babd-2a6933e098f7(default/busybox-7fdf7869d9-lb47v) wasn't assumed so cannot be forgotten"
	E0328 00:15:56.759643       1 schedule_one.go:1003] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7fdf7869d9-lb47v\": pod busybox-7fdf7869d9-lb47v is already assigned to node \"ha-170000-m03\"" pod="default/busybox-7fdf7869d9-lb47v"
	I0328 00:15:56.760022       1 schedule_one.go:1016] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7fdf7869d9-lb47v" node="ha-170000-m03"
	E0328 00:15:56.765846       1 framework.go:1244] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7fdf7869d9-jw6s4\": pod busybox-7fdf7869d9-jw6s4 is already assigned to node \"ha-170000\"" plugin="DefaultBinder" pod="default/busybox-7fdf7869d9-jw6s4" node="ha-170000"
	E0328 00:15:56.767099       1 schedule_one.go:336] "scheduler cache ForgetPod failed" err="pod 84df2f13-7839-4bd8-8611-52ce5902ebb3(default/busybox-7fdf7869d9-jw6s4) wasn't assumed so cannot be forgotten"
	E0328 00:15:56.770015       1 schedule_one.go:1003] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7fdf7869d9-jw6s4\": pod busybox-7fdf7869d9-jw6s4 is already assigned to node \"ha-170000\"" pod="default/busybox-7fdf7869d9-jw6s4"
	I0328 00:15:56.770380       1 schedule_one.go:1016] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7fdf7869d9-jw6s4" node="ha-170000"
	E0328 00:20:29.262420       1 framework.go:1244] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-fd9gp\": pod kube-proxy-fd9gp is already assigned to node \"ha-170000-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-fd9gp" node="ha-170000-m04"
	E0328 00:20:29.262555       1 schedule_one.go:1003] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-fd9gp\": pod kube-proxy-fd9gp is already assigned to node \"ha-170000-m04\"" pod="kube-system/kube-proxy-fd9gp"
	E0328 00:20:29.440238       1 framework.go:1244] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-qs6p4\": pod kube-proxy-qs6p4 is being deleted, cannot be assigned to a host" plugin="DefaultBinder" pod="kube-system/kube-proxy-qs6p4" node="ha-170000-m04"
	E0328 00:20:29.440340       1 schedule_one.go:1003] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-qs6p4\": pod kube-proxy-qs6p4 is being deleted, cannot be assigned to a host" pod="kube-system/kube-proxy-qs6p4"
	
	
	==> kubelet <==
	Mar 28 00:27:44 ha-170000 kubelet[2789]: E0328 00:27:44.032949    2789 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 28 00:27:44 ha-170000 kubelet[2789]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 28 00:27:44 ha-170000 kubelet[2789]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 28 00:27:44 ha-170000 kubelet[2789]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 28 00:27:44 ha-170000 kubelet[2789]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 28 00:28:44 ha-170000 kubelet[2789]: E0328 00:28:44.026277    2789 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 28 00:28:44 ha-170000 kubelet[2789]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 28 00:28:44 ha-170000 kubelet[2789]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 28 00:28:44 ha-170000 kubelet[2789]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 28 00:28:44 ha-170000 kubelet[2789]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 28 00:29:44 ha-170000 kubelet[2789]: E0328 00:29:44.027766    2789 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 28 00:29:44 ha-170000 kubelet[2789]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 28 00:29:44 ha-170000 kubelet[2789]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 28 00:29:44 ha-170000 kubelet[2789]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 28 00:29:44 ha-170000 kubelet[2789]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 28 00:30:44 ha-170000 kubelet[2789]: E0328 00:30:44.025881    2789 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 28 00:30:44 ha-170000 kubelet[2789]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 28 00:30:44 ha-170000 kubelet[2789]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 28 00:30:44 ha-170000 kubelet[2789]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 28 00:30:44 ha-170000 kubelet[2789]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 28 00:31:44 ha-170000 kubelet[2789]: E0328 00:31:44.025906    2789 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 28 00:31:44 ha-170000 kubelet[2789]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 28 00:31:44 ha-170000 kubelet[2789]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 28 00:31:44 ha-170000 kubelet[2789]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 28 00:31:44 ha-170000 kubelet[2789]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

-- /stdout --
** stderr ** 
	W0328 00:31:37.593849    9252 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p ha-170000 -n ha-170000
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p ha-170000 -n ha-170000: (13.2251455s)
helpers_test.go:261: (dbg) Run:  kubectl --context ha-170000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/CopyFile FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/CopyFile (583.01s)

TestMultiNode/serial/PingHostFrom2Pods (60.04s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-240000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-240000 -- exec busybox-7fdf7869d9-ct428 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-240000 -- exec busybox-7fdf7869d9-ct428 -- sh -c "ping -c 1 172.28.224.1"
multinode_test.go:583: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p multinode-240000 -- exec busybox-7fdf7869d9-ct428 -- sh -c "ping -c 1 172.28.224.1": exit status 1 (10.5518961s)

                                                
                                                
-- stdout --
	PING 172.28.224.1 (172.28.224.1): 56 data bytes
	
	--- 172.28.224.1 ping statistics ---
	1 packets transmitted, 0 packets received, 100% packet loss

                                                
                                                
-- /stdout --
** stderr ** 
	W0328 01:11:56.268843    6984 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	command terminated with exit code 1

                                                
                                                
** /stderr **
multinode_test.go:584: Failed to ping host (172.28.224.1) from pod (busybox-7fdf7869d9-ct428): exit status 1
multinode_test.go:572: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-240000 -- exec busybox-7fdf7869d9-zgwm4 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-240000 -- exec busybox-7fdf7869d9-zgwm4 -- sh -c "ping -c 1 172.28.224.1"
multinode_test.go:583: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p multinode-240000 -- exec busybox-7fdf7869d9-zgwm4 -- sh -c "ping -c 1 172.28.224.1": exit status 1 (10.5465366s)

                                                
                                                
-- stdout --
	PING 172.28.224.1 (172.28.224.1): 56 data bytes
	
	--- 172.28.224.1 ping statistics ---
	1 packets transmitted, 0 packets received, 100% packet loss

                                                
                                                
-- /stdout --
** stderr ** 
	W0328 01:12:07.402210    9724 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	command terminated with exit code 1

                                                
                                                
** /stderr **
multinode_test.go:584: Failed to ping host (172.28.224.1) from pod (busybox-7fdf7869d9-zgwm4): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-240000 -n multinode-240000
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-240000 -n multinode-240000: (13.0552963s)
helpers_test.go:244: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-240000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-240000 logs -n 25: (9.1863894s)
helpers_test.go:252: TestMultiNode/serial/PingHostFrom2Pods logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------|----------------------|-------------------|----------------|---------------------|---------------------|
	| Command |                       Args                        |       Profile        |       User        |    Version     |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|----------------------|-------------------|----------------|---------------------|---------------------|
	| ssh     | mount-start-2-133400 ssh -- ls                    | mount-start-2-133400 | minikube6\jenkins | v1.33.0-beta.0 | 28 Mar 24 01:00 UTC | 28 Mar 24 01:00 UTC |
	|         | /minikube-host                                    |                      |                   |                |                     |                     |
	| delete  | -p mount-start-1-133400                           | mount-start-1-133400 | minikube6\jenkins | v1.33.0-beta.0 | 28 Mar 24 01:00 UTC | 28 Mar 24 01:00 UTC |
	|         | --alsologtostderr -v=5                            |                      |                   |                |                     |                     |
	| ssh     | mount-start-2-133400 ssh -- ls                    | mount-start-2-133400 | minikube6\jenkins | v1.33.0-beta.0 | 28 Mar 24 01:00 UTC | 28 Mar 24 01:00 UTC |
	|         | /minikube-host                                    |                      |                   |                |                     |                     |
	| stop    | -p mount-start-2-133400                           | mount-start-2-133400 | minikube6\jenkins | v1.33.0-beta.0 | 28 Mar 24 01:00 UTC | 28 Mar 24 01:01 UTC |
	| start   | -p mount-start-2-133400                           | mount-start-2-133400 | minikube6\jenkins | v1.33.0-beta.0 | 28 Mar 24 01:01 UTC | 28 Mar 24 01:03 UTC |
	| mount   | C:\Users\jenkins.minikube6:/minikube-host         | mount-start-2-133400 | minikube6\jenkins | v1.33.0-beta.0 | 28 Mar 24 01:03 UTC |                     |
	|         | --profile mount-start-2-133400 --v 0              |                      |                   |                |                     |                     |
	|         | --9p-version 9p2000.L --gid 0 --ip                |                      |                   |                |                     |                     |
	|         | --msize 6543 --port 46465 --type 9p --uid         |                      |                   |                |                     |                     |
	|         |                                                 0 |                      |                   |                |                     |                     |
	| ssh     | mount-start-2-133400 ssh -- ls                    | mount-start-2-133400 | minikube6\jenkins | v1.33.0-beta.0 | 28 Mar 24 01:03 UTC | 28 Mar 24 01:03 UTC |
	|         | /minikube-host                                    |                      |                   |                |                     |                     |
	| delete  | -p mount-start-2-133400                           | mount-start-2-133400 | minikube6\jenkins | v1.33.0-beta.0 | 28 Mar 24 01:03 UTC | 28 Mar 24 01:04 UTC |
	| delete  | -p mount-start-1-133400                           | mount-start-1-133400 | minikube6\jenkins | v1.33.0-beta.0 | 28 Mar 24 01:04 UTC | 28 Mar 24 01:04 UTC |
	| start   | -p multinode-240000                               | multinode-240000     | minikube6\jenkins | v1.33.0-beta.0 | 28 Mar 24 01:04 UTC | 28 Mar 24 01:11 UTC |
	|         | --wait=true --memory=2200                         |                      |                   |                |                     |                     |
	|         | --nodes=2 -v=8                                    |                      |                   |                |                     |                     |
	|         | --alsologtostderr                                 |                      |                   |                |                     |                     |
	|         | --driver=hyperv                                   |                      |                   |                |                     |                     |
	| kubectl | -p multinode-240000 -- apply -f                   | multinode-240000     | minikube6\jenkins | v1.33.0-beta.0 | 28 Mar 24 01:11 UTC | 28 Mar 24 01:11 UTC |
	|         | ./testdata/multinodes/multinode-pod-dns-test.yaml |                      |                   |                |                     |                     |
	| kubectl | -p multinode-240000 -- rollout                    | multinode-240000     | minikube6\jenkins | v1.33.0-beta.0 | 28 Mar 24 01:11 UTC | 28 Mar 24 01:11 UTC |
	|         | status deployment/busybox                         |                      |                   |                |                     |                     |
	| kubectl | -p multinode-240000 -- get pods -o                | multinode-240000     | minikube6\jenkins | v1.33.0-beta.0 | 28 Mar 24 01:11 UTC | 28 Mar 24 01:11 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |                   |                |                     |                     |
	| kubectl | -p multinode-240000 -- get pods -o                | multinode-240000     | minikube6\jenkins | v1.33.0-beta.0 | 28 Mar 24 01:11 UTC | 28 Mar 24 01:11 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |                   |                |                     |                     |
	| kubectl | -p multinode-240000 -- exec                       | multinode-240000     | minikube6\jenkins | v1.33.0-beta.0 | 28 Mar 24 01:11 UTC | 28 Mar 24 01:11 UTC |
	|         | busybox-7fdf7869d9-ct428 --                       |                      |                   |                |                     |                     |
	|         | nslookup kubernetes.io                            |                      |                   |                |                     |                     |
	| kubectl | -p multinode-240000 -- exec                       | multinode-240000     | minikube6\jenkins | v1.33.0-beta.0 | 28 Mar 24 01:11 UTC | 28 Mar 24 01:11 UTC |
	|         | busybox-7fdf7869d9-zgwm4 --                       |                      |                   |                |                     |                     |
	|         | nslookup kubernetes.io                            |                      |                   |                |                     |                     |
	| kubectl | -p multinode-240000 -- exec                       | multinode-240000     | minikube6\jenkins | v1.33.0-beta.0 | 28 Mar 24 01:11 UTC | 28 Mar 24 01:11 UTC |
	|         | busybox-7fdf7869d9-ct428 --                       |                      |                   |                |                     |                     |
	|         | nslookup kubernetes.default                       |                      |                   |                |                     |                     |
	| kubectl | -p multinode-240000 -- exec                       | multinode-240000     | minikube6\jenkins | v1.33.0-beta.0 | 28 Mar 24 01:11 UTC | 28 Mar 24 01:11 UTC |
	|         | busybox-7fdf7869d9-zgwm4 --                       |                      |                   |                |                     |                     |
	|         | nslookup kubernetes.default                       |                      |                   |                |                     |                     |
	| kubectl | -p multinode-240000 -- exec                       | multinode-240000     | minikube6\jenkins | v1.33.0-beta.0 | 28 Mar 24 01:11 UTC | 28 Mar 24 01:11 UTC |
	|         | busybox-7fdf7869d9-ct428 -- nslookup              |                      |                   |                |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |                   |                |                     |                     |
	| kubectl | -p multinode-240000 -- exec                       | multinode-240000     | minikube6\jenkins | v1.33.0-beta.0 | 28 Mar 24 01:11 UTC | 28 Mar 24 01:11 UTC |
	|         | busybox-7fdf7869d9-zgwm4 -- nslookup              |                      |                   |                |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |                   |                |                     |                     |
	| kubectl | -p multinode-240000 -- get pods -o                | multinode-240000     | minikube6\jenkins | v1.33.0-beta.0 | 28 Mar 24 01:11 UTC | 28 Mar 24 01:11 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |                   |                |                     |                     |
	| kubectl | -p multinode-240000 -- exec                       | multinode-240000     | minikube6\jenkins | v1.33.0-beta.0 | 28 Mar 24 01:11 UTC | 28 Mar 24 01:11 UTC |
	|         | busybox-7fdf7869d9-ct428                          |                      |                   |                |                     |                     |
	|         | -- sh -c nslookup                                 |                      |                   |                |                     |                     |
	|         | host.minikube.internal | awk                      |                      |                   |                |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |                   |                |                     |                     |
	| kubectl | -p multinode-240000 -- exec                       | multinode-240000     | minikube6\jenkins | v1.33.0-beta.0 | 28 Mar 24 01:11 UTC |                     |
	|         | busybox-7fdf7869d9-ct428 -- sh                    |                      |                   |                |                     |                     |
	|         | -c ping -c 1 172.28.224.1                         |                      |                   |                |                     |                     |
	| kubectl | -p multinode-240000 -- exec                       | multinode-240000     | minikube6\jenkins | v1.33.0-beta.0 | 28 Mar 24 01:12 UTC | 28 Mar 24 01:12 UTC |
	|         | busybox-7fdf7869d9-zgwm4                          |                      |                   |                |                     |                     |
	|         | -- sh -c nslookup                                 |                      |                   |                |                     |                     |
	|         | host.minikube.internal | awk                      |                      |                   |                |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |                   |                |                     |                     |
	| kubectl | -p multinode-240000 -- exec                       | multinode-240000     | minikube6\jenkins | v1.33.0-beta.0 | 28 Mar 24 01:12 UTC |                     |
	|         | busybox-7fdf7869d9-zgwm4 -- sh                    |                      |                   |                |                     |                     |
	|         | -c ping -c 1 172.28.224.1                         |                      |                   |                |                     |                     |
	|---------|---------------------------------------------------|----------------------|-------------------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/28 01:04:12
	Running on machine: minikube6
	Binary: Built with gc go1.22.1 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0328 01:04:12.323300   12896 out.go:291] Setting OutFile to fd 976 ...
	I0328 01:04:12.323829   12896 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 01:04:12.323829   12896 out.go:304] Setting ErrFile to fd 752...
	I0328 01:04:12.323829   12896 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 01:04:12.355196   12896 out.go:298] Setting JSON to false
	I0328 01:04:12.361382   12896 start.go:129] hostinfo: {"hostname":"minikube6","uptime":10513,"bootTime":1711577338,"procs":190,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4170 Build 19045.4170","kernelVersion":"10.0.19045.4170 Build 19045.4170","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0328 01:04:12.361608   12896 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0328 01:04:12.365652   12896 out.go:177] * [multinode-240000] minikube v1.33.0-beta.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4170 Build 19045.4170
	I0328 01:04:12.370679   12896 notify.go:220] Checking for updates...
	I0328 01:04:12.373187   12896 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0328 01:04:12.376191   12896 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0328 01:04:12.379410   12896 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0328 01:04:12.382194   12896 out.go:177]   - MINIKUBE_LOCATION=18485
	I0328 01:04:12.385442   12896 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0328 01:04:12.389639   12896 config.go:182] Loaded profile config "ha-170000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0328 01:04:12.389639   12896 driver.go:392] Setting default libvirt URI to qemu:///system
	I0328 01:04:18.080588   12896 out.go:177] * Using the hyperv driver based on user configuration
	I0328 01:04:18.083688   12896 start.go:297] selected driver: hyperv
	I0328 01:04:18.083688   12896 start.go:901] validating driver "hyperv" against <nil>
	I0328 01:04:18.083688   12896 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0328 01:04:18.136263   12896 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0328 01:04:18.137766   12896 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0328 01:04:18.137766   12896 cni.go:84] Creating CNI manager for ""
	I0328 01:04:18.137881   12896 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0328 01:04:18.137919   12896 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0328 01:04:18.137990   12896 start.go:340] cluster config:
	{Name:multinode-240000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:multinode-240000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0328 01:04:18.137990   12896 iso.go:125] acquiring lock: {Name:mk879943e10653d47fd8ae811a43a2f6cff06f02 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0328 01:04:18.140962   12896 out.go:177] * Starting "multinode-240000" primary control-plane node in "multinode-240000" cluster
	I0328 01:04:18.160932   12896 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0328 01:04:18.161627   12896 preload.go:147] Found local preload: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4
	I0328 01:04:18.161771   12896 cache.go:56] Caching tarball of preloaded images
	I0328 01:04:18.162105   12896 preload.go:173] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0328 01:04:18.162105   12896 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0328 01:04:18.162105   12896 profile.go:142] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-240000\config.json ...
	I0328 01:04:18.162941   12896 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-240000\config.json: {Name:mka23bf876aa4f5daf0195be0c8ae3e0dab544fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 01:04:18.164483   12896 start.go:360] acquireMachinesLock for multinode-240000: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0328 01:04:18.164725   12896 start.go:364] duration metric: took 241.8µs to acquireMachinesLock for "multinode-240000"
	I0328 01:04:18.164940   12896 start.go:93] Provisioning new machine with config: &{Name:multinode-240000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:multinode-240000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0328 01:04:18.164940   12896 start.go:125] createHost starting for "" (driver="hyperv")
	I0328 01:04:18.167655   12896 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0328 01:04:18.167705   12896 start.go:159] libmachine.API.Create for "multinode-240000" (driver="hyperv")
	I0328 01:04:18.167705   12896 client.go:168] LocalClient.Create starting
	I0328 01:04:18.168535   12896 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem
	I0328 01:04:18.168677   12896 main.go:141] libmachine: Decoding PEM data...
	I0328 01:04:18.168677   12896 main.go:141] libmachine: Parsing certificate...
	I0328 01:04:18.168677   12896 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem
	I0328 01:04:18.169221   12896 main.go:141] libmachine: Decoding PEM data...
	I0328 01:04:18.169385   12896 main.go:141] libmachine: Parsing certificate...
	I0328 01:04:18.169419   12896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0328 01:04:20.441051   12896 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0328 01:04:20.441051   12896 main.go:141] libmachine: [stderr =====>] : 
	I0328 01:04:20.441204   12896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0328 01:04:22.368344   12896 main.go:141] libmachine: [stdout =====>] : False
	
	I0328 01:04:22.368406   12896 main.go:141] libmachine: [stderr =====>] : 
	I0328 01:04:22.368603   12896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0328 01:04:23.975055   12896 main.go:141] libmachine: [stdout =====>] : True
	
	I0328 01:04:23.975707   12896 main.go:141] libmachine: [stderr =====>] : 
	I0328 01:04:23.975812   12896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0328 01:04:27.834822   12896 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0328 01:04:27.834942   12896 main.go:141] libmachine: [stderr =====>] : 
	I0328 01:04:27.837806   12896 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube6/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.0-1711559712-18485-amd64.iso...
	I0328 01:04:28.333004   12896 main.go:141] libmachine: Creating SSH key...
	I0328 01:04:28.987980   12896 main.go:141] libmachine: Creating VM...
	I0328 01:04:28.987980   12896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0328 01:04:32.046680   12896 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0328 01:04:32.046969   12896 main.go:141] libmachine: [stderr =====>] : 
	I0328 01:04:32.047273   12896 main.go:141] libmachine: Using switch "Default Switch"
	I0328 01:04:32.047445   12896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0328 01:04:33.950737   12896 main.go:141] libmachine: [stdout =====>] : True
	
	I0328 01:04:33.950737   12896 main.go:141] libmachine: [stderr =====>] : 
	I0328 01:04:33.950737   12896 main.go:141] libmachine: Creating VHD
	I0328 01:04:33.950737   12896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-240000\fixed.vhd' -SizeBytes 10MB -Fixed
	I0328 01:04:37.909227   12896 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube6
	Path                    : C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-240000\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : FE0D7810-5CA7-46C0-91DA-E34E6B5DE80A
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0328 01:04:37.909227   12896 main.go:141] libmachine: [stderr =====>] : 
	I0328 01:04:37.910314   12896 main.go:141] libmachine: Writing magic tar header
	I0328 01:04:37.910363   12896 main.go:141] libmachine: Writing SSH key tar header
	I0328 01:04:37.920197   12896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-240000\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-240000\disk.vhd' -VHDType Dynamic -DeleteSource
	I0328 01:04:41.177969   12896 main.go:141] libmachine: [stdout =====>] : 
	I0328 01:04:41.178733   12896 main.go:141] libmachine: [stderr =====>] : 
	I0328 01:04:41.178733   12896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-240000\disk.vhd' -SizeBytes 20000MB
	I0328 01:04:43.853054   12896 main.go:141] libmachine: [stdout =====>] : 
	I0328 01:04:43.853054   12896 main.go:141] libmachine: [stderr =====>] : 
	I0328 01:04:43.853054   12896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM multinode-240000 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-240000' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0328 01:04:47.731471   12896 main.go:141] libmachine: [stdout =====>] : 
	Name             State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----             ----- ----------- ----------------- ------   ------             -------
	multinode-240000 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0328 01:04:47.731471   12896 main.go:141] libmachine: [stderr =====>] : 
	I0328 01:04:47.731471   12896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName multinode-240000 -DynamicMemoryEnabled $false
	I0328 01:04:50.140222   12896 main.go:141] libmachine: [stdout =====>] : 
	I0328 01:04:50.140222   12896 main.go:141] libmachine: [stderr =====>] : 
	I0328 01:04:50.140320   12896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor multinode-240000 -Count 2
	I0328 01:04:52.468285   12896 main.go:141] libmachine: [stdout =====>] : 
	I0328 01:04:52.469061   12896 main.go:141] libmachine: [stderr =====>] : 
	I0328 01:04:52.469126   12896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName multinode-240000 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-240000\boot2docker.iso'
	I0328 01:04:55.239803   12896 main.go:141] libmachine: [stdout =====>] : 
	I0328 01:04:55.240216   12896 main.go:141] libmachine: [stderr =====>] : 
	I0328 01:04:55.240216   12896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName multinode-240000 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-240000\disk.vhd'
	I0328 01:04:58.074630   12896 main.go:141] libmachine: [stdout =====>] : 
	I0328 01:04:58.075591   12896 main.go:141] libmachine: [stderr =====>] : 
	I0328 01:04:58.075591   12896 main.go:141] libmachine: Starting VM...
	I0328 01:04:58.075591   12896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-240000
	I0328 01:05:01.353092   12896 main.go:141] libmachine: [stdout =====>] : 
	I0328 01:05:01.353785   12896 main.go:141] libmachine: [stderr =====>] : 
	I0328 01:05:01.353785   12896 main.go:141] libmachine: Waiting for host to start...
	I0328 01:05:01.353889   12896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-240000 ).state
	I0328 01:05:03.725327   12896 main.go:141] libmachine: [stdout =====>] : Running
	
	I0328 01:05:03.725327   12896 main.go:141] libmachine: [stderr =====>] : 
	I0328 01:05:03.725327   12896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-240000 ).networkadapters[0]).ipaddresses[0]
	I0328 01:05:06.380171   12896 main.go:141] libmachine: [stdout =====>] : 
	I0328 01:05:06.380171   12896 main.go:141] libmachine: [stderr =====>] : 
	I0328 01:05:07.393039   12896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-240000 ).state
	I0328 01:05:09.691742   12896 main.go:141] libmachine: [stdout =====>] : Running
	
	I0328 01:05:09.692489   12896 main.go:141] libmachine: [stderr =====>] : 
	I0328 01:05:09.692489   12896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-240000 ).networkadapters[0]).ipaddresses[0]
	I0328 01:05:12.354864   12896 main.go:141] libmachine: [stdout =====>] : 
	I0328 01:05:12.355631   12896 main.go:141] libmachine: [stderr =====>] : 
	I0328 01:05:13.365314   12896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-240000 ).state
	I0328 01:05:15.703394   12896 main.go:141] libmachine: [stdout =====>] : Running
	
	I0328 01:05:15.703499   12896 main.go:141] libmachine: [stderr =====>] : 
	I0328 01:05:15.703607   12896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-240000 ).networkadapters[0]).ipaddresses[0]
	I0328 01:05:18.360989   12896 main.go:141] libmachine: [stdout =====>] : 
	I0328 01:05:18.362097   12896 main.go:141] libmachine: [stderr =====>] : 
	I0328 01:05:19.375146   12896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-240000 ).state
	I0328 01:05:21.760792   12896 main.go:141] libmachine: [stdout =====>] : Running
	
	I0328 01:05:21.761615   12896 main.go:141] libmachine: [stderr =====>] : 
	I0328 01:05:21.761812   12896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-240000 ).networkadapters[0]).ipaddresses[0]
	I0328 01:05:24.469760   12896 main.go:141] libmachine: [stdout =====>] : 
	I0328 01:05:24.469760   12896 main.go:141] libmachine: [stderr =====>] : 
	I0328 01:05:25.474707   12896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-240000 ).state
	I0328 01:05:27.837417   12896 main.go:141] libmachine: [stdout =====>] : Running
	
	I0328 01:05:27.837417   12896 main.go:141] libmachine: [stderr =====>] : 
	I0328 01:05:27.837568   12896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-240000 ).networkadapters[0]).ipaddresses[0]
	I0328 01:05:30.593748   12896 main.go:141] libmachine: [stdout =====>] : 172.28.227.122
	
	I0328 01:05:30.594054   12896 main.go:141] libmachine: [stderr =====>] : 
	I0328 01:05:30.594054   12896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-240000 ).state
	I0328 01:05:32.956768   12896 main.go:141] libmachine: [stdout =====>] : Running
	
	I0328 01:05:32.956942   12896 main.go:141] libmachine: [stderr =====>] : 
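The "Waiting for host to start" phase above repeatedly queries the VM state and the first NIC's IP via PowerShell until a non-empty address comes back. A minimal Python sketch of that poll-until-ready pattern (not minikube's actual code; `get_ip` stands in for the `Get-VM … ipaddresses[0]` call):

```python
import time

def wait_for_ip(get_ip, timeout=120, interval=1.0):
    """Poll get_ip() until it returns a non-empty address or timeout expires.

    Mirrors the retry loop above: an empty stdout from the IP query means
    the guest has no address yet, so the check is retried after a pause.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        ip = get_ip()
        if ip:
            return ip
        time.sleep(interval)
    raise TimeoutError("host did not report an IP address in time")
```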
	I0328 01:05:32.957032   12896 machine.go:94] provisionDockerMachine start ...
	I0328 01:05:32.957137   12896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-240000 ).state
	I0328 01:05:35.406341   12896 main.go:141] libmachine: [stdout =====>] : Running
	
	I0328 01:05:35.406954   12896 main.go:141] libmachine: [stderr =====>] : 
	I0328 01:05:35.407014   12896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-240000 ).networkadapters[0]).ipaddresses[0]
	I0328 01:05:38.206565   12896 main.go:141] libmachine: [stdout =====>] : 172.28.227.122
	
	I0328 01:05:38.207628   12896 main.go:141] libmachine: [stderr =====>] : 
	I0328 01:05:38.213673   12896 main.go:141] libmachine: Using SSH client type: native
	I0328 01:05:38.225105   12896 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x12a9f80] 0x12acb60 <nil>  [] 0s} 172.28.227.122 22 <nil> <nil>}
	I0328 01:05:38.225105   12896 main.go:141] libmachine: About to run SSH command:
	hostname
	I0328 01:05:38.361651   12896 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0328 01:05:38.361715   12896 buildroot.go:166] provisioning hostname "multinode-240000"
	I0328 01:05:38.361838   12896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-240000 ).state
	I0328 01:05:40.625609   12896 main.go:141] libmachine: [stdout =====>] : Running
	
	I0328 01:05:40.625755   12896 main.go:141] libmachine: [stderr =====>] : 
	I0328 01:05:40.625755   12896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-240000 ).networkadapters[0]).ipaddresses[0]
	I0328 01:05:43.351850   12896 main.go:141] libmachine: [stdout =====>] : 172.28.227.122
	
	I0328 01:05:43.351850   12896 main.go:141] libmachine: [stderr =====>] : 
	I0328 01:05:43.358725   12896 main.go:141] libmachine: Using SSH client type: native
	I0328 01:05:43.358794   12896 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x12a9f80] 0x12acb60 <nil>  [] 0s} 172.28.227.122 22 <nil> <nil>}
	I0328 01:05:43.358794   12896 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-240000 && echo "multinode-240000" | sudo tee /etc/hostname
	I0328 01:05:43.520019   12896 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-240000
	
	I0328 01:05:43.520019   12896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-240000 ).state
	I0328 01:05:45.792238   12896 main.go:141] libmachine: [stdout =====>] : Running
	
	I0328 01:05:45.792238   12896 main.go:141] libmachine: [stderr =====>] : 
	I0328 01:05:45.792238   12896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-240000 ).networkadapters[0]).ipaddresses[0]
	I0328 01:05:48.467017   12896 main.go:141] libmachine: [stdout =====>] : 172.28.227.122
	
	I0328 01:05:48.467017   12896 main.go:141] libmachine: [stderr =====>] : 
	I0328 01:05:48.473515   12896 main.go:141] libmachine: Using SSH client type: native
	I0328 01:05:48.473515   12896 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x12a9f80] 0x12acb60 <nil>  [] 0s} 172.28.227.122 22 <nil> <nil>}
	I0328 01:05:48.473515   12896 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-240000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-240000/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-240000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0328 01:05:48.619488   12896 main.go:141] libmachine: SSH cmd err, output: <nil>: 
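The shell snippet above makes the /etc/hosts update idempotent: skip if the hostname is already present, rewrite an existing 127.0.1.1 line, otherwise append. The same logic as a small Python sketch (illustrative only; the real flow runs grep/sed over SSH):

```python
import re

def ensure_hosts_entry(hosts_text, hostname, loopback="127.0.1.1"):
    """Return hosts_text with a loopback->hostname mapping, added only once."""
    lines = hosts_text.splitlines()
    # Already mapped? Leave the file untouched (the `grep -xq` guard above).
    if any(re.search(r"\s" + re.escape(hostname) + r"$", l) for l in lines):
        return hosts_text
    entry = f"{loopback} {hostname}"
    for i, line in enumerate(lines):
        if line.startswith(loopback):   # the `sed -i` branch
            lines[i] = entry
            break
    else:                               # the `tee -a` branch
        lines.append(entry)
    return "\n".join(lines) + "\n"
```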
	I0328 01:05:48.619555   12896 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0328 01:05:48.619555   12896 buildroot.go:174] setting up certificates
	I0328 01:05:48.619555   12896 provision.go:84] configureAuth start
	I0328 01:05:48.619555   12896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-240000 ).state
	I0328 01:05:50.911177   12896 main.go:141] libmachine: [stdout =====>] : Running
	
	I0328 01:05:50.911177   12896 main.go:141] libmachine: [stderr =====>] : 
	I0328 01:05:50.911839   12896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-240000 ).networkadapters[0]).ipaddresses[0]
	I0328 01:05:53.631598   12896 main.go:141] libmachine: [stdout =====>] : 172.28.227.122
	
	I0328 01:05:53.631598   12896 main.go:141] libmachine: [stderr =====>] : 
	I0328 01:05:53.632247   12896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-240000 ).state
	I0328 01:05:55.898315   12896 main.go:141] libmachine: [stdout =====>] : Running
	
	I0328 01:05:55.898586   12896 main.go:141] libmachine: [stderr =====>] : 
	I0328 01:05:55.898586   12896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-240000 ).networkadapters[0]).ipaddresses[0]
	I0328 01:05:58.579761   12896 main.go:141] libmachine: [stdout =====>] : 172.28.227.122
	
	I0328 01:05:58.579761   12896 main.go:141] libmachine: [stderr =====>] : 
	I0328 01:05:58.579761   12896 provision.go:143] copyHostCerts
	I0328 01:05:58.580572   12896 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0328 01:05:58.580572   12896 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0328 01:05:58.580572   12896 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0328 01:05:58.581793   12896 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0328 01:05:58.583414   12896 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0328 01:05:58.583671   12896 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0328 01:05:58.583671   12896 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0328 01:05:58.584115   12896 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0328 01:05:58.585051   12896 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0328 01:05:58.585348   12896 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0328 01:05:58.585348   12896 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0328 01:05:58.585886   12896 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1675 bytes)
	I0328 01:05:58.586661   12896 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-240000 san=[127.0.0.1 172.28.227.122 localhost minikube multinode-240000]
	I0328 01:05:58.752039   12896 provision.go:177] copyRemoteCerts
	I0328 01:05:58.767069   12896 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0328 01:05:58.767146   12896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-240000 ).state
	I0328 01:06:01.015867   12896 main.go:141] libmachine: [stdout =====>] : Running
	
	I0328 01:06:01.016699   12896 main.go:141] libmachine: [stderr =====>] : 
	I0328 01:06:01.016699   12896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-240000 ).networkadapters[0]).ipaddresses[0]
	I0328 01:06:03.702233   12896 main.go:141] libmachine: [stdout =====>] : 172.28.227.122
	
	I0328 01:06:03.703264   12896 main.go:141] libmachine: [stderr =====>] : 
	I0328 01:06:03.703848   12896 sshutil.go:53] new ssh client: &{IP:172.28.227.122 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-240000\id_rsa Username:docker}
	I0328 01:06:03.809991   12896 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (5.0423687s)
	I0328 01:06:03.810075   12896 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0328 01:06:03.810075   12896 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0328 01:06:03.859673   12896 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0328 01:06:03.860074   12896 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1216 bytes)
	I0328 01:06:03.908003   12896 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0328 01:06:03.908428   12896 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0328 01:06:03.955168   12896 provision.go:87] duration metric: took 15.3355081s to configureAuth
	I0328 01:06:03.955168   12896 buildroot.go:189] setting minikube options for container-runtime
	I0328 01:06:03.955465   12896 config.go:182] Loaded profile config "multinode-240000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0328 01:06:03.955465   12896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-240000 ).state
	I0328 01:06:06.223305   12896 main.go:141] libmachine: [stdout =====>] : Running
	
	I0328 01:06:06.223305   12896 main.go:141] libmachine: [stderr =====>] : 
	I0328 01:06:06.223704   12896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-240000 ).networkadapters[0]).ipaddresses[0]
	I0328 01:06:08.888317   12896 main.go:141] libmachine: [stdout =====>] : 172.28.227.122
	
	I0328 01:06:08.888317   12896 main.go:141] libmachine: [stderr =====>] : 
	I0328 01:06:08.897945   12896 main.go:141] libmachine: Using SSH client type: native
	I0328 01:06:08.897945   12896 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x12a9f80] 0x12acb60 <nil>  [] 0s} 172.28.227.122 22 <nil> <nil>}
	I0328 01:06:08.897945   12896 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0328 01:06:09.020121   12896 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0328 01:06:09.020311   12896 buildroot.go:70] root file system type: tmpfs
	I0328 01:06:09.020442   12896 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0328 01:06:09.020534   12896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-240000 ).state
	I0328 01:06:11.278316   12896 main.go:141] libmachine: [stdout =====>] : Running
	
	I0328 01:06:11.278627   12896 main.go:141] libmachine: [stderr =====>] : 
	I0328 01:06:11.278627   12896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-240000 ).networkadapters[0]).ipaddresses[0]
	I0328 01:06:13.959780   12896 main.go:141] libmachine: [stdout =====>] : 172.28.227.122
	
	I0328 01:06:13.960311   12896 main.go:141] libmachine: [stderr =====>] : 
	I0328 01:06:13.967362   12896 main.go:141] libmachine: Using SSH client type: native
	I0328 01:06:13.967520   12896 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x12a9f80] 0x12acb60 <nil>  [] 0s} 172.28.227.122 22 <nil> <nil>}
	I0328 01:06:13.967520   12896 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0328 01:06:14.120139   12896 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0328 01:06:14.120139   12896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-240000 ).state
	I0328 01:06:16.375191   12896 main.go:141] libmachine: [stdout =====>] : Running
	
	I0328 01:06:16.375548   12896 main.go:141] libmachine: [stderr =====>] : 
	I0328 01:06:16.375548   12896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-240000 ).networkadapters[0]).ipaddresses[0]
	I0328 01:06:19.065336   12896 main.go:141] libmachine: [stdout =====>] : 172.28.227.122
	
	I0328 01:06:19.066180   12896 main.go:141] libmachine: [stderr =====>] : 
	I0328 01:06:19.072094   12896 main.go:141] libmachine: Using SSH client type: native
	I0328 01:06:19.072828   12896 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x12a9f80] 0x12acb60 <nil>  [] 0s} 172.28.227.122 22 <nil> <nil>}
	I0328 01:06:19.072828   12896 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0328 01:06:21.267872   12896 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0328 01:06:21.267872   12896 machine.go:97] duration metric: took 48.3105131s to provisionDockerMachine
	I0328 01:06:21.267872   12896 client.go:171] duration metric: took 2m3.0993394s to LocalClient.Create
	I0328 01:06:21.267872   12896 start.go:167] duration metric: took 2m3.0993394s to libmachine.API.Create "multinode-240000"
	I0328 01:06:21.267872   12896 start.go:293] postStartSetup for "multinode-240000" (driver="hyperv")
	I0328 01:06:21.267872   12896 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0328 01:06:21.282477   12896 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0328 01:06:21.282477   12896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-240000 ).state
	I0328 01:06:23.573542   12896 main.go:141] libmachine: [stdout =====>] : Running
	
	I0328 01:06:23.573542   12896 main.go:141] libmachine: [stderr =====>] : 
	I0328 01:06:23.574141   12896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-240000 ).networkadapters[0]).ipaddresses[0]
	I0328 01:06:26.233627   12896 main.go:141] libmachine: [stdout =====>] : 172.28.227.122
	
	I0328 01:06:26.233627   12896 main.go:141] libmachine: [stderr =====>] : 
	I0328 01:06:26.234573   12896 sshutil.go:53] new ssh client: &{IP:172.28.227.122 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-240000\id_rsa Username:docker}
	I0328 01:06:26.342872   12896 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (5.0603616s)
	I0328 01:06:26.358012   12896 ssh_runner.go:195] Run: cat /etc/os-release
	I0328 01:06:26.365635   12896 command_runner.go:130] > NAME=Buildroot
	I0328 01:06:26.365635   12896 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0328 01:06:26.365635   12896 command_runner.go:130] > ID=buildroot
	I0328 01:06:26.365635   12896 command_runner.go:130] > VERSION_ID=2023.02.9
	I0328 01:06:26.365635   12896 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0328 01:06:26.365635   12896 info.go:137] Remote host: Buildroot 2023.02.9
	I0328 01:06:26.365635   12896 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0328 01:06:26.366400   12896 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0328 01:06:26.367634   12896 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\104602.pem -> 104602.pem in /etc/ssl/certs
	I0328 01:06:26.367634   12896 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\104602.pem -> /etc/ssl/certs/104602.pem
	I0328 01:06:26.380509   12896 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0328 01:06:26.402347   12896 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\104602.pem --> /etc/ssl/certs/104602.pem (1708 bytes)
	I0328 01:06:26.450882   12896 start.go:296] duration metric: took 5.1829746s for postStartSetup
	I0328 01:06:26.454347   12896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-240000 ).state
	I0328 01:06:28.778667   12896 main.go:141] libmachine: [stdout =====>] : Running
	
	I0328 01:06:28.778667   12896 main.go:141] libmachine: [stderr =====>] : 
	I0328 01:06:28.779425   12896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-240000 ).networkadapters[0]).ipaddresses[0]
	I0328 01:06:31.470666   12896 main.go:141] libmachine: [stdout =====>] : 172.28.227.122
	
	I0328 01:06:31.470666   12896 main.go:141] libmachine: [stderr =====>] : 
	I0328 01:06:31.470666   12896 profile.go:142] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-240000\config.json ...
	I0328 01:06:31.473949   12896 start.go:128] duration metric: took 2m13.3080341s to createHost
	I0328 01:06:31.474026   12896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-240000 ).state
	I0328 01:06:33.725842   12896 main.go:141] libmachine: [stdout =====>] : Running
	
	I0328 01:06:33.725842   12896 main.go:141] libmachine: [stderr =====>] : 
	I0328 01:06:33.726837   12896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-240000 ).networkadapters[0]).ipaddresses[0]
	I0328 01:06:36.389761   12896 main.go:141] libmachine: [stdout =====>] : 172.28.227.122
	
	I0328 01:06:36.389761   12896 main.go:141] libmachine: [stderr =====>] : 
	I0328 01:06:36.397308   12896 main.go:141] libmachine: Using SSH client type: native
	I0328 01:06:36.398045   12896 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x12a9f80] 0x12acb60 <nil>  [] 0s} 172.28.227.122 22 <nil> <nil>}
	I0328 01:06:36.398045   12896 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0328 01:06:36.519057   12896 main.go:141] libmachine: SSH cmd err, output: <nil>: 1711587996.532899287
	
	I0328 01:06:36.519336   12896 fix.go:216] guest clock: 1711587996.532899287
	I0328 01:06:36.519336   12896 fix.go:229] Guest: 2024-03-28 01:06:36.532899287 +0000 UTC Remote: 2024-03-28 01:06:31.4739498 +0000 UTC m=+139.350970301 (delta=5.058949487s)
	I0328 01:06:36.519450   12896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-240000 ).state
	I0328 01:06:38.794362   12896 main.go:141] libmachine: [stdout =====>] : Running
	
	I0328 01:06:38.795056   12896 main.go:141] libmachine: [stderr =====>] : 
	I0328 01:06:38.795132   12896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-240000 ).networkadapters[0]).ipaddresses[0]
	I0328 01:06:41.486831   12896 main.go:141] libmachine: [stdout =====>] : 172.28.227.122
	
	I0328 01:06:41.486886   12896 main.go:141] libmachine: [stderr =====>] : 
	I0328 01:06:41.491598   12896 main.go:141] libmachine: Using SSH client type: native
	I0328 01:06:41.492421   12896 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x12a9f80] 0x12acb60 <nil>  [] 0s} 172.28.227.122 22 <nil> <nil>}
	I0328 01:06:41.492421   12896 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1711587996
	I0328 01:06:41.631843   12896 main.go:141] libmachine: SSH cmd err, output: <nil>: Thu Mar 28 01:06:36 UTC 2024
	
	I0328 01:06:41.631843   12896 fix.go:236] clock set: Thu Mar 28 01:06:36 UTC 2024
	 (err=<nil>)
	I0328 01:06:41.631843   12896 start.go:83] releasing machines lock for "multinode-240000", held for 2m23.4660924s
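The clock-fix step above reads the guest's epoch time (`date +%s.%N`), compares it with the host, and, seeing a ~5s delta, resets the guest with `sudo date -s @<epoch>`. A minimal sketch of that decision, assuming a skew threshold (the threshold value here is illustrative, not minikube's):

```python
def clock_fix_command(host_epoch, guest_epoch, threshold=2.0):
    """Return the guest clock-reset command if skew exceeds threshold, else None."""
    if abs(guest_epoch - host_epoch) <= threshold:
        return None
    return f"sudo date -s @{int(host_epoch)}"
```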
	I0328 01:06:41.632455   12896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-240000 ).state
	I0328 01:06:43.889807   12896 main.go:141] libmachine: [stdout =====>] : Running
	
	I0328 01:06:43.890225   12896 main.go:141] libmachine: [stderr =====>] : 
	I0328 01:06:43.890225   12896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-240000 ).networkadapters[0]).ipaddresses[0]
	I0328 01:06:46.603336   12896 main.go:141] libmachine: [stdout =====>] : 172.28.227.122
	
	I0328 01:06:46.604221   12896 main.go:141] libmachine: [stderr =====>] : 
	I0328 01:06:46.609384   12896 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0328 01:06:46.609468   12896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-240000 ).state
	I0328 01:06:46.620257   12896 ssh_runner.go:195] Run: cat /version.json
	I0328 01:06:46.620257   12896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-240000 ).state
	I0328 01:06:48.931850   12896 main.go:141] libmachine: [stdout =====>] : Running
	
	I0328 01:06:48.931850   12896 main.go:141] libmachine: [stdout =====>] : Running
	
	I0328 01:06:48.932426   12896 main.go:141] libmachine: [stderr =====>] : 
	I0328 01:06:48.932426   12896 main.go:141] libmachine: [stderr =====>] : 
	I0328 01:06:48.932538   12896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-240000 ).networkadapters[0]).ipaddresses[0]
	I0328 01:06:48.932596   12896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-240000 ).networkadapters[0]).ipaddresses[0]
	I0328 01:06:51.716168   12896 main.go:141] libmachine: [stdout =====>] : 172.28.227.122
	
	I0328 01:06:51.716865   12896 main.go:141] libmachine: [stderr =====>] : 
	I0328 01:06:51.717064   12896 sshutil.go:53] new ssh client: &{IP:172.28.227.122 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-240000\id_rsa Username:docker}
	I0328 01:06:51.740268   12896 main.go:141] libmachine: [stdout =====>] : 172.28.227.122
	
	I0328 01:06:51.740268   12896 main.go:141] libmachine: [stderr =====>] : 
	I0328 01:06:51.740676   12896 sshutil.go:53] new ssh client: &{IP:172.28.227.122 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-240000\id_rsa Username:docker}
	I0328 01:06:51.876637   12896 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0328 01:06:51.877256   12896 command_runner.go:130] > {"iso_version": "v1.33.0-1711559712-18485", "kicbase_version": "v0.0.43-beta.0", "minikube_version": "v1.33.0-beta.0", "commit": "db97f5257476488cfa11a4cd2d95d2aa6fbd9d33"}
	I0328 01:06:51.877256   12896 ssh_runner.go:235] Completed: cat /version.json: (5.2569636s)
	I0328 01:06:51.877256   12896 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.2678372s)
	I0328 01:06:51.892910   12896 ssh_runner.go:195] Run: systemctl --version
	I0328 01:06:51.904538   12896 command_runner.go:130] > systemd 252 (252)
	I0328 01:06:51.904538   12896 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0328 01:06:51.918046   12896 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0328 01:06:51.927079   12896 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0328 01:06:51.928221   12896 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0328 01:06:51.941328   12896 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0328 01:06:51.977954   12896 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0328 01:06:51.978060   12896 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0328 01:06:51.978110   12896 start.go:494] detecting cgroup driver to use...
	I0328 01:06:51.978461   12896 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0328 01:06:52.021906   12896 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0328 01:06:52.035032   12896 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0328 01:06:52.068723   12896 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0328 01:06:52.093072   12896 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0328 01:06:52.107613   12896 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0328 01:06:52.152654   12896 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0328 01:06:52.192406   12896 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0328 01:06:52.233701   12896 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0328 01:06:52.269437   12896 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0328 01:06:52.306060   12896 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0328 01:06:52.342157   12896 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0328 01:06:52.380428   12896 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0328 01:06:52.417392   12896 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0328 01:06:52.439244   12896 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0328 01:06:52.452322   12896 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0328 01:06:52.489019   12896 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0328 01:06:52.717871   12896 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0328 01:06:52.754145   12896 start.go:494] detecting cgroup driver to use...
	I0328 01:06:52.767614   12896 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0328 01:06:52.792323   12896 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0328 01:06:52.792781   12896 command_runner.go:130] > [Unit]
	I0328 01:06:52.792781   12896 command_runner.go:130] > Description=Docker Application Container Engine
	I0328 01:06:52.792781   12896 command_runner.go:130] > Documentation=https://docs.docker.com
	I0328 01:06:52.792781   12896 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0328 01:06:52.792781   12896 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0328 01:06:52.792781   12896 command_runner.go:130] > StartLimitBurst=3
	I0328 01:06:52.792853   12896 command_runner.go:130] > StartLimitIntervalSec=60
	I0328 01:06:52.792853   12896 command_runner.go:130] > [Service]
	I0328 01:06:52.792853   12896 command_runner.go:130] > Type=notify
	I0328 01:06:52.792853   12896 command_runner.go:130] > Restart=on-failure
	I0328 01:06:52.792853   12896 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0328 01:06:52.792853   12896 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0328 01:06:52.792853   12896 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0328 01:06:52.792853   12896 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0328 01:06:52.792853   12896 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0328 01:06:52.792853   12896 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0328 01:06:52.792853   12896 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0328 01:06:52.792853   12896 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0328 01:06:52.792853   12896 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0328 01:06:52.792853   12896 command_runner.go:130] > ExecStart=
	I0328 01:06:52.792853   12896 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0328 01:06:52.792853   12896 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0328 01:06:52.792853   12896 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0328 01:06:52.792853   12896 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0328 01:06:52.792853   12896 command_runner.go:130] > LimitNOFILE=infinity
	I0328 01:06:52.792853   12896 command_runner.go:130] > LimitNPROC=infinity
	I0328 01:06:52.792853   12896 command_runner.go:130] > LimitCORE=infinity
	I0328 01:06:52.792853   12896 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0328 01:06:52.792853   12896 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0328 01:06:52.792853   12896 command_runner.go:130] > TasksMax=infinity
	I0328 01:06:52.792853   12896 command_runner.go:130] > TimeoutStartSec=0
	I0328 01:06:52.792853   12896 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0328 01:06:52.792853   12896 command_runner.go:130] > Delegate=yes
	I0328 01:06:52.792853   12896 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0328 01:06:52.792853   12896 command_runner.go:130] > KillMode=process
	I0328 01:06:52.792853   12896 command_runner.go:130] > [Install]
	I0328 01:06:52.792853   12896 command_runner.go:130] > WantedBy=multi-user.target
	I0328 01:06:52.806927   12896 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0328 01:06:52.848455   12896 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0328 01:06:52.902673   12896 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0328 01:06:52.942403   12896 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0328 01:06:52.982320   12896 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0328 01:06:53.051222   12896 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0328 01:06:53.077474   12896 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0328 01:06:53.113685   12896 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0328 01:06:53.127430   12896 ssh_runner.go:195] Run: which cri-dockerd
	I0328 01:06:53.134521   12896 command_runner.go:130] > /usr/bin/cri-dockerd
	I0328 01:06:53.146894   12896 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0328 01:06:53.167523   12896 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0328 01:06:53.215978   12896 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0328 01:06:53.448379   12896 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0328 01:06:53.694242   12896 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0328 01:06:53.694242   12896 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0328 01:06:53.750065   12896 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0328 01:06:53.964278   12896 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0328 01:06:56.512159   12896 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5477643s)
	I0328 01:06:56.525288   12896 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0328 01:06:56.562785   12896 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0328 01:06:56.597870   12896 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0328 01:06:56.822669   12896 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0328 01:06:57.041483   12896 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0328 01:06:57.280698   12896 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0328 01:06:57.325416   12896 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0328 01:06:57.362192   12896 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0328 01:06:57.572900   12896 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0328 01:06:57.679895   12896 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0328 01:06:57.692071   12896 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0328 01:06:57.700713   12896 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0328 01:06:57.700713   12896 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0328 01:06:57.700713   12896 command_runner.go:130] > Device: 0,22	Inode: 890         Links: 1
	I0328 01:06:57.700713   12896 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0328 01:06:57.700713   12896 command_runner.go:130] > Access: 2024-03-28 01:06:57.614378405 +0000
	I0328 01:06:57.700713   12896 command_runner.go:130] > Modify: 2024-03-28 01:06:57.614378405 +0000
	I0328 01:06:57.700713   12896 command_runner.go:130] > Change: 2024-03-28 01:06:57.618378413 +0000
	I0328 01:06:57.700713   12896 command_runner.go:130] >  Birth: -
	I0328 01:06:57.700713   12896 start.go:562] Will wait 60s for crictl version
	I0328 01:06:57.712694   12896 ssh_runner.go:195] Run: which crictl
	I0328 01:06:57.718819   12896 command_runner.go:130] > /usr/bin/crictl
	I0328 01:06:57.730853   12896 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0328 01:06:57.813961   12896 command_runner.go:130] > Version:  0.1.0
	I0328 01:06:57.813961   12896 command_runner.go:130] > RuntimeName:  docker
	I0328 01:06:57.813961   12896 command_runner.go:130] > RuntimeVersion:  26.0.0
	I0328 01:06:57.814995   12896 command_runner.go:130] > RuntimeApiVersion:  v1
	I0328 01:06:57.815074   12896 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.0
	RuntimeApiVersion:  v1
	I0328 01:06:57.826526   12896 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0328 01:06:57.860053   12896 command_runner.go:130] > 26.0.0
	I0328 01:06:57.872254   12896 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0328 01:06:57.904447   12896 command_runner.go:130] > 26.0.0
	I0328 01:06:57.908137   12896 out.go:204] * Preparing Kubernetes v1.29.3 on Docker 26.0.0 ...
	I0328 01:06:57.908310   12896 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0328 01:06:57.912614   12896 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0328 01:06:57.913237   12896 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0328 01:06:57.913237   12896 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0328 01:06:57.913237   12896 ip.go:207] Found interface: {Index:6 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:26:7a:39 Flags:up|broadcast|multicast|running}
	I0328 01:06:57.916564   12896 ip.go:210] interface addr: fe80::e3e0:8483:9c84:940f/64
	I0328 01:06:57.916564   12896 ip.go:210] interface addr: 172.28.224.1/20
	I0328 01:06:57.928479   12896 ssh_runner.go:195] Run: grep 172.28.224.1	host.minikube.internal$ /etc/hosts
	I0328 01:06:57.935078   12896 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.28.224.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0328 01:06:57.958534   12896 kubeadm.go:877] updating cluster {Name:multinode-240000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.29.3 ClusterName:multinode-240000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.28.227.122 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptio
ns:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0328 01:06:57.958795   12896 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0328 01:06:57.969700   12896 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0328 01:06:57.994170   12896 docker.go:685] Got preloaded images: 
	I0328 01:06:57.994170   12896 docker.go:691] registry.k8s.io/kube-apiserver:v1.29.3 wasn't preloaded
	I0328 01:06:58.008316   12896 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0328 01:06:58.026858   12896 command_runner.go:139] > {"Repositories":{}}
	I0328 01:06:58.040432   12896 ssh_runner.go:195] Run: which lz4
	I0328 01:06:58.046516   12896 command_runner.go:130] > /usr/bin/lz4
	I0328 01:06:58.046829   12896 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0328 01:06:58.060152   12896 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0328 01:06:58.068128   12896 command_runner.go:130] ! stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0328 01:06:58.068673   12896 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0328 01:06:58.068884   12896 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (367996162 bytes)
	I0328 01:07:00.533799   12896 docker.go:649] duration metric: took 2.4864526s to copy over tarball
	I0328 01:07:00.547117   12896 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0328 01:07:09.241930   12896 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (8.6947541s)
	I0328 01:07:09.241930   12896 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0328 01:07:09.309748   12896 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0328 01:07:09.328808   12896 command_runner.go:139] > {"Repositories":{"gcr.io/k8s-minikube/storage-provisioner":{"gcr.io/k8s-minikube/storage-provisioner:v5":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562"},"registry.k8s.io/coredns/coredns":{"registry.k8s.io/coredns/coredns:v1.11.1":"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1":"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4"},"registry.k8s.io/etcd":{"registry.k8s.io/etcd:3.5.12-0":"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899","registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b":"sha256:3861cfcd7c04ccac1f062788eca
39487248527ef0c0cfd477a83d7691a75a899"},"registry.k8s.io/kube-apiserver":{"registry.k8s.io/kube-apiserver:v1.29.3":"sha256:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533","registry.k8s.io/kube-apiserver@sha256:ebd35bc7ef24672c5c50ffccb21f71307a82d4fb20c0ecb6d3d27b28b69e0e3c":"sha256:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533"},"registry.k8s.io/kube-controller-manager":{"registry.k8s.io/kube-controller-manager:v1.29.3":"sha256:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3","registry.k8s.io/kube-controller-manager@sha256:5a7968649f8aee83d5a2d75d6d377ba2680df25b0b97b3be12fa10f15ad67104":"sha256:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3"},"registry.k8s.io/kube-proxy":{"registry.k8s.io/kube-proxy:v1.29.3":"sha256:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392","registry.k8s.io/kube-proxy@sha256:fa87cba052adcb992bd59bd1304115c6f3b3fb370407805ba52af3d9ff3f0863":"sha256:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b
5bbe4f71784e392"},"registry.k8s.io/kube-scheduler":{"registry.k8s.io/kube-scheduler:v1.29.3":"sha256:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b","registry.k8s.io/kube-scheduler@sha256:6fb91d791db6d62f6b1ac9dbed23fdb597335550d99ff8333d53c4136e889b3a":"sha256:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b"},"registry.k8s.io/pause":{"registry.k8s.io/pause:3.9":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c"}}}
	I0328 01:07:09.329640   12896 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2630 bytes)
	I0328 01:07:09.378944   12896 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0328 01:07:09.609696   12896 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0328 01:07:12.561157   12896 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.9513848s)
	I0328 01:07:12.572561   12896 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0328 01:07:12.599433   12896 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.29.3
	I0328 01:07:12.600359   12896 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.29.3
	I0328 01:07:12.600359   12896 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.29.3
	I0328 01:07:12.600359   12896 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.29.3
	I0328 01:07:12.600359   12896 command_runner.go:130] > registry.k8s.io/etcd:3.5.12-0
	I0328 01:07:12.600359   12896 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0328 01:07:12.600359   12896 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0328 01:07:12.600359   12896 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0328 01:07:12.600487   12896 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.29.3
	registry.k8s.io/kube-controller-manager:v1.29.3
	registry.k8s.io/kube-scheduler:v1.29.3
	registry.k8s.io/kube-proxy:v1.29.3
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0328 01:07:12.600487   12896 cache_images.go:84] Images are preloaded, skipping loading
	I0328 01:07:12.600487   12896 kubeadm.go:928] updating node { 172.28.227.122 8443 v1.29.3 docker true true} ...
	I0328 01:07:12.600732   12896 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-240000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.28.227.122
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:multinode-240000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0328 01:07:12.611450   12896 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0328 01:07:12.653867   12896 command_runner.go:130] > cgroupfs
	I0328 01:07:12.654697   12896 cni.go:84] Creating CNI manager for ""
	I0328 01:07:12.654697   12896 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0328 01:07:12.654766   12896 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0328 01:07:12.654840   12896 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.28.227.122 APIServerPort:8443 KubernetesVersion:v1.29.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-240000 NodeName:multinode-240000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.28.227.122"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.28.227.122 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPat
h:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0328 01:07:12.654872   12896 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.28.227.122
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-240000"
	  kubeletExtraArgs:
	    node-ip: 172.28.227.122
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.28.227.122"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0328 01:07:12.669388   12896 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0328 01:07:12.690373   12896 command_runner.go:130] > kubeadm
	I0328 01:07:12.690373   12896 command_runner.go:130] > kubectl
	I0328 01:07:12.690509   12896 command_runner.go:130] > kubelet
	I0328 01:07:12.690509   12896 binaries.go:44] Found k8s binaries, skipping transfer
	I0328 01:07:12.702867   12896 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0328 01:07:12.721414   12896 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0328 01:07:12.753904   12896 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0328 01:07:12.791813   12896 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2164 bytes)
	I0328 01:07:12.843238   12896 ssh_runner.go:195] Run: grep 172.28.227.122	control-plane.minikube.internal$ /etc/hosts
	I0328 01:07:12.850141   12896 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.28.227.122	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0328 01:07:12.885843   12896 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0328 01:07:13.101777   12896 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0328 01:07:13.135494   12896 certs.go:68] Setting up C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-240000 for IP: 172.28.227.122
	I0328 01:07:13.135994   12896 certs.go:194] generating shared ca certs ...
	I0328 01:07:13.136056   12896 certs.go:226] acquiring lock for ca certs: {Name:mkc71405905d3cea24da832e98113e061e759324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 01:07:13.136730   12896 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key
	I0328 01:07:13.137059   12896 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key
	I0328 01:07:13.137251   12896 certs.go:256] generating profile certs ...
	I0328 01:07:13.137955   12896 certs.go:363] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-240000\client.key
	I0328 01:07:13.138059   12896 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-240000\client.crt with IP's: []
	I0328 01:07:13.323405   12896 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-240000\client.crt ...
	I0328 01:07:13.323405   12896 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-240000\client.crt: {Name:mk945bd0e02a1fda69c08fd67fbd6252360e9e44 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 01:07:13.325551   12896 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-240000\client.key ...
	I0328 01:07:13.325551   12896 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-240000\client.key: {Name:mk53292903d61ea8ee997baa3245e32edb57f190 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 01:07:13.326013   12896 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-240000\apiserver.key.9dfa13a5
	I0328 01:07:13.326013   12896 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-240000\apiserver.crt.9dfa13a5 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.28.227.122]
	I0328 01:07:14.072755   12896 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-240000\apiserver.crt.9dfa13a5 ...
	I0328 01:07:14.072755   12896 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-240000\apiserver.crt.9dfa13a5: {Name:mk3e9561c5cd01edde8736942714774a4364e429 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 01:07:14.073219   12896 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-240000\apiserver.key.9dfa13a5 ...
	I0328 01:07:14.074224   12896 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-240000\apiserver.key.9dfa13a5: {Name:mk2d5fe2e791e8480463fb5c6cd0212242e1343f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 01:07:14.074422   12896 certs.go:381] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-240000\apiserver.crt.9dfa13a5 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-240000\apiserver.crt
	I0328 01:07:14.091467   12896 certs.go:385] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-240000\apiserver.key.9dfa13a5 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-240000\apiserver.key
	I0328 01:07:14.091467   12896 certs.go:363] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-240000\proxy-client.key
	I0328 01:07:14.092926   12896 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-240000\proxy-client.crt with IP's: []
	I0328 01:07:14.443490   12896 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-240000\proxy-client.crt ...
	I0328 01:07:14.443490   12896 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-240000\proxy-client.crt: {Name:mk0cf404cd00970f7c57fa986d91e8f48b82d1dc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 01:07:14.445776   12896 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-240000\proxy-client.key ...
	I0328 01:07:14.445776   12896 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-240000\proxy-client.key: {Name:mk5967a744f7de2c984fea4d16311c095ff48f34 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 01:07:14.446292   12896 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0328 01:07:14.447440   12896 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0328 01:07:14.447440   12896 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0328 01:07:14.447834   12896 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0328 01:07:14.448111   12896 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-240000\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0328 01:07:14.448111   12896 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-240000\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0328 01:07:14.448624   12896 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-240000\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0328 01:07:14.458759   12896 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-240000\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0328 01:07:14.459839   12896 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\10460.pem (1338 bytes)
	W0328 01:07:14.460315   12896 certs.go:480] ignoring C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\10460_empty.pem, impossibly tiny 0 bytes
	I0328 01:07:14.460315   12896 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0328 01:07:14.460780   12896 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0328 01:07:14.461124   12896 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0328 01:07:14.461431   12896 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0328 01:07:14.461994   12896 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\104602.pem (1708 bytes)
	I0328 01:07:14.462149   12896 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\104602.pem -> /usr/share/ca-certificates/104602.pem
	I0328 01:07:14.462409   12896 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0328 01:07:14.462614   12896 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\10460.pem -> /usr/share/ca-certificates/10460.pem
	I0328 01:07:14.463494   12896 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0328 01:07:14.518097   12896 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0328 01:07:14.571963   12896 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0328 01:07:14.619774   12896 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0328 01:07:14.668325   12896 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-240000\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0328 01:07:14.720271   12896 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-240000\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0328 01:07:14.775398   12896 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-240000\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0328 01:07:14.822102   12896 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-240000\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0328 01:07:14.869554   12896 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\104602.pem --> /usr/share/ca-certificates/104602.pem (1708 bytes)
	I0328 01:07:14.923446   12896 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0328 01:07:14.976069   12896 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\10460.pem --> /usr/share/ca-certificates/10460.pem (1338 bytes)
	I0328 01:07:15.022379   12896 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0328 01:07:15.072616   12896 ssh_runner.go:195] Run: openssl version
	I0328 01:07:15.082750   12896 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0328 01:07:15.095479   12896 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0328 01:07:15.131340   12896 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0328 01:07:15.138910   12896 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Mar 27 23:37 /usr/share/ca-certificates/minikubeCA.pem
	I0328 01:07:15.139743   12896 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 27 23:37 /usr/share/ca-certificates/minikubeCA.pem
	I0328 01:07:15.153537   12896 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0328 01:07:15.161995   12896 command_runner.go:130] > b5213941
	I0328 01:07:15.177468   12896 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0328 01:07:15.213408   12896 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10460.pem && ln -fs /usr/share/ca-certificates/10460.pem /etc/ssl/certs/10460.pem"
	I0328 01:07:15.248503   12896 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10460.pem
	I0328 01:07:15.256146   12896 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Mar 27 23:40 /usr/share/ca-certificates/10460.pem
	I0328 01:07:15.256572   12896 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 27 23:40 /usr/share/ca-certificates/10460.pem
	I0328 01:07:15.269365   12896 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10460.pem
	I0328 01:07:15.279387   12896 command_runner.go:130] > 51391683
	I0328 01:07:15.291756   12896 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10460.pem /etc/ssl/certs/51391683.0"
	I0328 01:07:15.330013   12896 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/104602.pem && ln -fs /usr/share/ca-certificates/104602.pem /etc/ssl/certs/104602.pem"
	I0328 01:07:15.363823   12896 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/104602.pem
	I0328 01:07:15.372630   12896 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Mar 27 23:40 /usr/share/ca-certificates/104602.pem
	I0328 01:07:15.372630   12896 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 27 23:40 /usr/share/ca-certificates/104602.pem
	I0328 01:07:15.389119   12896 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/104602.pem
	I0328 01:07:15.398538   12896 command_runner.go:130] > 3ec20f2e
	I0328 01:07:15.415502   12896 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/104602.pem /etc/ssl/certs/3ec20f2e.0"
	I0328 01:07:15.454712   12896 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0328 01:07:15.460774   12896 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0328 01:07:15.462034   12896 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0328 01:07:15.462468   12896 kubeadm.go:391] StartCluster: {Name:multinode-240000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:multinode-240000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.28.227.122 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0328 01:07:15.473425   12896 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0328 01:07:15.510990   12896 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0328 01:07:15.531917   12896 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I0328 01:07:15.531917   12896 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I0328 01:07:15.531917   12896 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I0328 01:07:15.545498   12896 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0328 01:07:15.585747   12896 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0328 01:07:15.606895   12896 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0328 01:07:15.607257   12896 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0328 01:07:15.607257   12896 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0328 01:07:15.607257   12896 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0328 01:07:15.607475   12896 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0328 01:07:15.607475   12896 kubeadm.go:156] found existing configuration files:
	
	I0328 01:07:15.621455   12896 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0328 01:07:15.640777   12896 command_runner.go:130] ! grep: /etc/kubernetes/admin.conf: No such file or directory
	I0328 01:07:15.640885   12896 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0328 01:07:15.655589   12896 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0328 01:07:15.688764   12896 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0328 01:07:15.705549   12896 command_runner.go:130] ! grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0328 01:07:15.706487   12896 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0328 01:07:15.722264   12896 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0328 01:07:15.755374   12896 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0328 01:07:15.775893   12896 command_runner.go:130] ! grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0328 01:07:15.776028   12896 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0328 01:07:15.791269   12896 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0328 01:07:15.822676   12896 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0328 01:07:15.841385   12896 command_runner.go:130] ! grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0328 01:07:15.841385   12896 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0328 01:07:15.856986   12896 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0328 01:07:15.876071   12896 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0328 01:07:16.402509   12896 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0328 01:07:16.402571   12896 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0328 01:07:31.419250   12896 kubeadm.go:309] [init] Using Kubernetes version: v1.29.3
	I0328 01:07:31.419250   12896 command_runner.go:130] > [init] Using Kubernetes version: v1.29.3
	I0328 01:07:31.419389   12896 kubeadm.go:309] [preflight] Running pre-flight checks
	I0328 01:07:31.419389   12896 command_runner.go:130] > [preflight] Running pre-flight checks
	I0328 01:07:31.419586   12896 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0328 01:07:31.419643   12896 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I0328 01:07:31.419899   12896 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0328 01:07:31.419957   12896 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0328 01:07:31.420169   12896 command_runner.go:130] > [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0328 01:07:31.420169   12896 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0328 01:07:31.420295   12896 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0328 01:07:31.420295   12896 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0328 01:07:31.423808   12896 out.go:204]   - Generating certificates and keys ...
	I0328 01:07:31.424064   12896 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0328 01:07:31.424124   12896 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0328 01:07:31.424232   12896 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0328 01:07:31.424232   12896 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0328 01:07:31.424438   12896 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I0328 01:07:31.424438   12896 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0328 01:07:31.424690   12896 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0328 01:07:31.424690   12896 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I0328 01:07:31.424867   12896 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0328 01:07:31.424927   12896 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I0328 01:07:31.424980   12896 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I0328 01:07:31.424980   12896 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0328 01:07:31.424980   12896 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0328 01:07:31.424980   12896 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I0328 01:07:31.424980   12896 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-240000] and IPs [172.28.227.122 127.0.0.1 ::1]
	I0328 01:07:31.424980   12896 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-240000] and IPs [172.28.227.122 127.0.0.1 ::1]
	I0328 01:07:31.425515   12896 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0328 01:07:31.425583   12896 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I0328 01:07:31.425881   12896 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-240000] and IPs [172.28.227.122 127.0.0.1 ::1]
	I0328 01:07:31.425881   12896 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-240000] and IPs [172.28.227.122 127.0.0.1 ::1]
	I0328 01:07:31.425881   12896 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0328 01:07:31.425881   12896 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I0328 01:07:31.426638   12896 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0328 01:07:31.426798   12896 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I0328 01:07:31.426798   12896 command_runner.go:130] > [certs] Generating "sa" key and public key
	I0328 01:07:31.426798   12896 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0328 01:07:31.426798   12896 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0328 01:07:31.426798   12896 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0328 01:07:31.426798   12896 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0328 01:07:31.426798   12896 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0328 01:07:31.427466   12896 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0328 01:07:31.427466   12896 command_runner.go:130] > [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0328 01:07:31.427466   12896 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0328 01:07:31.427466   12896 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0328 01:07:31.427466   12896 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0328 01:07:31.427466   12896 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0328 01:07:31.427466   12896 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0328 01:07:31.427466   12896 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0328 01:07:31.427997   12896 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0328 01:07:31.427997   12896 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0328 01:07:31.428210   12896 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0328 01:07:31.431893   12896 out.go:204]   - Booting up control plane ...
	I0328 01:07:31.428210   12896 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0328 01:07:31.431893   12896 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0328 01:07:31.431893   12896 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0328 01:07:31.431893   12896 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0328 01:07:31.431893   12896 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0328 01:07:31.432852   12896 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0328 01:07:31.432852   12896 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0328 01:07:31.432852   12896 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0328 01:07:31.432852   12896 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0328 01:07:31.432852   12896 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0328 01:07:31.432852   12896 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0328 01:07:31.432852   12896 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0328 01:07:31.432852   12896 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0328 01:07:31.432852   12896 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0328 01:07:31.432852   12896 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0328 01:07:31.433848   12896 command_runner.go:130] > [apiclient] All control plane components are healthy after 9.004746 seconds
	I0328 01:07:31.433848   12896 kubeadm.go:309] [apiclient] All control plane components are healthy after 9.004746 seconds
	I0328 01:07:31.433848   12896 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0328 01:07:31.433848   12896 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0328 01:07:31.433848   12896 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0328 01:07:31.433848   12896 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0328 01:07:31.433848   12896 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0328 01:07:31.433848   12896 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I0328 01:07:31.434858   12896 kubeadm.go:309] [mark-control-plane] Marking the node multinode-240000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0328 01:07:31.434858   12896 command_runner.go:130] > [mark-control-plane] Marking the node multinode-240000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0328 01:07:31.434858   12896 kubeadm.go:309] [bootstrap-token] Using token: j7ro1s.uvi1j6n1ixdetawj
	I0328 01:07:31.436875   12896 out.go:204]   - Configuring RBAC rules ...
	I0328 01:07:31.434858   12896 command_runner.go:130] > [bootstrap-token] Using token: j7ro1s.uvi1j6n1ixdetawj
	I0328 01:07:31.437870   12896 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0328 01:07:31.437870   12896 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0328 01:07:31.437870   12896 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0328 01:07:31.437870   12896 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0328 01:07:31.437870   12896 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0328 01:07:31.437870   12896 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0328 01:07:31.438858   12896 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0328 01:07:31.438858   12896 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0328 01:07:31.438858   12896 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0328 01:07:31.438858   12896 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0328 01:07:31.438858   12896 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0328 01:07:31.438858   12896 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0328 01:07:31.438858   12896 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0328 01:07:31.438858   12896 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0328 01:07:31.438858   12896 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0328 01:07:31.439878   12896 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0328 01:07:31.439878   12896 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0328 01:07:31.439878   12896 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0328 01:07:31.439878   12896 kubeadm.go:309] 
	I0328 01:07:31.439878   12896 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0328 01:07:31.439878   12896 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I0328 01:07:31.439878   12896 kubeadm.go:309] 
	I0328 01:07:31.439878   12896 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I0328 01:07:31.439878   12896 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0328 01:07:31.439878   12896 kubeadm.go:309] 
	I0328 01:07:31.439878   12896 command_runner.go:130] >   mkdir -p $HOME/.kube
	I0328 01:07:31.439878   12896 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0328 01:07:31.439878   12896 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0328 01:07:31.439878   12896 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0328 01:07:31.439878   12896 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0328 01:07:31.439878   12896 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0328 01:07:31.439878   12896 kubeadm.go:309] 
	I0328 01:07:31.440885   12896 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0328 01:07:31.440885   12896 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I0328 01:07:31.440885   12896 kubeadm.go:309] 
	I0328 01:07:31.440885   12896 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0328 01:07:31.440885   12896 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0328 01:07:31.440885   12896 kubeadm.go:309] 
	I0328 01:07:31.440885   12896 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0328 01:07:31.440885   12896 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I0328 01:07:31.440885   12896 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0328 01:07:31.440885   12896 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0328 01:07:31.440885   12896 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0328 01:07:31.440885   12896 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0328 01:07:31.440885   12896 kubeadm.go:309] 
	I0328 01:07:31.440885   12896 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0328 01:07:31.440885   12896 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I0328 01:07:31.440885   12896 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0328 01:07:31.440885   12896 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I0328 01:07:31.440885   12896 kubeadm.go:309] 
	I0328 01:07:31.440885   12896 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token j7ro1s.uvi1j6n1ixdetawj \
	I0328 01:07:31.440885   12896 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token j7ro1s.uvi1j6n1ixdetawj \
	I0328 01:07:31.440885   12896 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:0ddd52f56960165cc9115f65779af061c85c9b2eafbef06ee2128eb63ce54d7a \
	I0328 01:07:31.441872   12896 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:0ddd52f56960165cc9115f65779af061c85c9b2eafbef06ee2128eb63ce54d7a \
	I0328 01:07:31.441872   12896 kubeadm.go:309] 	--control-plane 
	I0328 01:07:31.441872   12896 command_runner.go:130] > 	--control-plane 
	I0328 01:07:31.441872   12896 kubeadm.go:309] 
	I0328 01:07:31.441872   12896 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I0328 01:07:31.441872   12896 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0328 01:07:31.441872   12896 kubeadm.go:309] 
	I0328 01:07:31.441872   12896 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token j7ro1s.uvi1j6n1ixdetawj \
	I0328 01:07:31.441872   12896 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token j7ro1s.uvi1j6n1ixdetawj \
	I0328 01:07:31.441872   12896 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:0ddd52f56960165cc9115f65779af061c85c9b2eafbef06ee2128eb63ce54d7a 
	I0328 01:07:31.441872   12896 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:0ddd52f56960165cc9115f65779af061c85c9b2eafbef06ee2128eb63ce54d7a 
	I0328 01:07:31.441872   12896 cni.go:84] Creating CNI manager for ""
	I0328 01:07:31.441872   12896 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0328 01:07:31.444922   12896 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0328 01:07:31.460862   12896 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0328 01:07:31.475507   12896 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0328 01:07:31.475719   12896 command_runner.go:130] >   Size: 2694104   	Blocks: 5264       IO Block: 4096   regular file
	I0328 01:07:31.475719   12896 command_runner.go:130] > Device: 0,17	Inode: 3497        Links: 1
	I0328 01:07:31.475719   12896 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0328 01:07:31.475791   12896 command_runner.go:130] > Access: 2024-03-28 01:05:26.959484100 +0000
	I0328 01:07:31.475791   12896 command_runner.go:130] > Modify: 2024-03-27 22:52:09.000000000 +0000
	I0328 01:07:31.475791   12896 command_runner.go:130] > Change: 2024-03-28 01:05:18.020000000 +0000
	I0328 01:07:31.475791   12896 command_runner.go:130] >  Birth: -
	I0328 01:07:31.476799   12896 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.29.3/kubectl ...
	I0328 01:07:31.476799   12896 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0328 01:07:31.562075   12896 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0328 01:07:32.418457   12896 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I0328 01:07:32.418868   12896 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I0328 01:07:32.418868   12896 command_runner.go:130] > serviceaccount/kindnet created
	I0328 01:07:32.418933   12896 command_runner.go:130] > daemonset.apps/kindnet created
	I0328 01:07:32.418933   12896 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0328 01:07:32.435204   12896 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:07:32.437212   12896 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes multinode-240000 minikube.k8s.io/updated_at=2024_03_28T01_07_32_0700 minikube.k8s.io/version=v1.33.0-beta.0 minikube.k8s.io/commit=1a940f980f77c9bfc8ef93678db8c1f49c9dd79d minikube.k8s.io/name=multinode-240000 minikube.k8s.io/primary=true
	I0328 01:07:32.447541   12896 command_runner.go:130] > -16
	I0328 01:07:32.447541   12896 ops.go:34] apiserver oom_adj: -16
	I0328 01:07:32.635556   12896 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I0328 01:07:32.648947   12896 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:07:32.661961   12896 command_runner.go:130] > node/multinode-240000 labeled
	I0328 01:07:32.831940   12896 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0328 01:07:33.153511   12896 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:07:33.287482   12896 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0328 01:07:33.660264   12896 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:07:33.795918   12896 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0328 01:07:34.163747   12896 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:07:34.283512   12896 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0328 01:07:34.651701   12896 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:07:34.782708   12896 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0328 01:07:35.160028   12896 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:07:35.296023   12896 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0328 01:07:35.652263   12896 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:07:35.812434   12896 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0328 01:07:36.157669   12896 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:07:36.298920   12896 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0328 01:07:36.661651   12896 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:07:36.788962   12896 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0328 01:07:37.152932   12896 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:07:37.281643   12896 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0328 01:07:37.654641   12896 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:07:37.793644   12896 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0328 01:07:38.162618   12896 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:07:38.308323   12896 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0328 01:07:38.651436   12896 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:07:38.823447   12896 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0328 01:07:39.155410   12896 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:07:39.287167   12896 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0328 01:07:39.663045   12896 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:07:39.807638   12896 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0328 01:07:40.151781   12896 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:07:40.275477   12896 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0328 01:07:40.658635   12896 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:07:40.796798   12896 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0328 01:07:41.154907   12896 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:07:41.296661   12896 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0328 01:07:41.656722   12896 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:07:41.802202   12896 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0328 01:07:42.150278   12896 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:07:42.281873   12896 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0328 01:07:42.650850   12896 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:07:42.768115   12896 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0328 01:07:43.157097   12896 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:07:43.298644   12896 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0328 01:07:43.663571   12896 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:07:43.857166   12896 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0328 01:07:44.155070   12896 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0328 01:07:44.412416   12896 command_runner.go:130] > NAME      SECRETS   AGE
	I0328 01:07:44.412461   12896 command_runner.go:130] > default   0         0s
	I0328 01:07:44.412529   12896 kubeadm.go:1107] duration metric: took 11.9934456s to wait for elevateKubeSystemPrivileges
	W0328 01:07:44.412590   12896 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0328 01:07:44.412635   12896 kubeadm.go:393] duration metric: took 28.9499737s to StartCluster
	I0328 01:07:44.412698   12896 settings.go:142] acquiring lock: {Name:mk6b97e58c5fe8f88c3b8025e136ed13b1b7453d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 01:07:44.412698   12896 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0328 01:07:44.415137   12896 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\kubeconfig: {Name:mk4f4c590fd703778dedd3b8c3d630c561af8c6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 01:07:44.418534   12896 start.go:234] Will wait 6m0s for node &{Name: IP:172.28.227.122 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0328 01:07:44.418594   12896 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0328 01:07:44.422560   12896 out.go:177] * Verifying Kubernetes components...
	I0328 01:07:44.418734   12896 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0328 01:07:44.418990   12896 config.go:182] Loaded profile config "multinode-240000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0328 01:07:44.422586   12896 addons.go:69] Setting storage-provisioner=true in profile "multinode-240000"
	I0328 01:07:44.425827   12896 addons.go:234] Setting addon storage-provisioner=true in "multinode-240000"
	I0328 01:07:44.422586   12896 addons.go:69] Setting default-storageclass=true in profile "multinode-240000"
	I0328 01:07:44.425827   12896 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-240000"
	I0328 01:07:44.425827   12896 host.go:66] Checking if "multinode-240000" exists ...
	I0328 01:07:44.427698   12896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-240000 ).state
	I0328 01:07:44.427698   12896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-240000 ).state
	I0328 01:07:44.440720   12896 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0328 01:07:44.799869   12896 command_runner.go:130] > apiVersion: v1
	I0328 01:07:44.799998   12896 command_runner.go:130] > data:
	I0328 01:07:44.799998   12896 command_runner.go:130] >   Corefile: |
	I0328 01:07:44.799998   12896 command_runner.go:130] >     .:53 {
	I0328 01:07:44.799998   12896 command_runner.go:130] >         errors
	I0328 01:07:44.799998   12896 command_runner.go:130] >         health {
	I0328 01:07:44.799998   12896 command_runner.go:130] >            lameduck 5s
	I0328 01:07:44.799998   12896 command_runner.go:130] >         }
	I0328 01:07:44.799998   12896 command_runner.go:130] >         ready
	I0328 01:07:44.799998   12896 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0328 01:07:44.800153   12896 command_runner.go:130] >            pods insecure
	I0328 01:07:44.800253   12896 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0328 01:07:44.800298   12896 command_runner.go:130] >            ttl 30
	I0328 01:07:44.800298   12896 command_runner.go:130] >         }
	I0328 01:07:44.800298   12896 command_runner.go:130] >         prometheus :9153
	I0328 01:07:44.800298   12896 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0328 01:07:44.800298   12896 command_runner.go:130] >            max_concurrent 1000
	I0328 01:07:44.800298   12896 command_runner.go:130] >         }
	I0328 01:07:44.800298   12896 command_runner.go:130] >         cache 30
	I0328 01:07:44.800298   12896 command_runner.go:130] >         loop
	I0328 01:07:44.800298   12896 command_runner.go:130] >         reload
	I0328 01:07:44.800298   12896 command_runner.go:130] >         loadbalance
	I0328 01:07:44.800298   12896 command_runner.go:130] >     }
	I0328 01:07:44.800298   12896 command_runner.go:130] > kind: ConfigMap
	I0328 01:07:44.800298   12896 command_runner.go:130] > metadata:
	I0328 01:07:44.800298   12896 command_runner.go:130] >   creationTimestamp: "2024-03-28T01:07:31Z"
	I0328 01:07:44.800298   12896 command_runner.go:130] >   name: coredns
	I0328 01:07:44.800298   12896 command_runner.go:130] >   namespace: kube-system
	I0328 01:07:44.800298   12896 command_runner.go:130] >   resourceVersion: "271"
	I0328 01:07:44.800298   12896 command_runner.go:130] >   uid: 52af7e71-d822-445e-8306-e081a436b431
	I0328 01:07:44.800298   12896 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.28.224.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0328 01:07:44.976101   12896 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0328 01:07:45.352190   12896 command_runner.go:130] > configmap/coredns replaced
	I0328 01:07:45.352270   12896 start.go:948] {"host.minikube.internal": 172.28.224.1} host record injected into CoreDNS's ConfigMap
	I0328 01:07:45.353687   12896 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0328 01:07:45.353687   12896 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0328 01:07:45.355257   12896 kapi.go:59] client config for multinode-240000: &rest.Config{Host:"https://172.28.227.122:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-240000\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-240000\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x26ab500), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0328 01:07:45.355257   12896 kapi.go:59] client config for multinode-240000: &rest.Config{Host:"https://172.28.227.122:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-240000\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-240000\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x26ab500), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0328 01:07:45.356629   12896 cert_rotation.go:137] Starting client certificate rotation controller
	I0328 01:07:45.357410   12896 node_ready.go:35] waiting up to 6m0s for node "multinode-240000" to be "Ready" ...
	I0328 01:07:45.357580   12896 round_trippers.go:463] GET https://172.28.227.122:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0328 01:07:45.357658   12896 round_trippers.go:469] Request Headers:
	I0328 01:07:45.357658   12896 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:07:45.357658   12896 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:07:45.357580   12896 round_trippers.go:463] GET https://172.28.227.122:8443/api/v1/nodes/multinode-240000
	I0328 01:07:45.357658   12896 round_trippers.go:469] Request Headers:
	I0328 01:07:45.357658   12896 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:07:45.357658   12896 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:07:45.369533   12896 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0328 01:07:45.369533   12896 round_trippers.go:577] Response Headers:
	I0328 01:07:45.369533   12896 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:07:45.369533   12896 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:07:45.369533   12896 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:07:45 GMT
	I0328 01:07:45.369533   12896 round_trippers.go:580]     Audit-Id: aeb3a276-e7f5-4a30-9cc8-6ff0f4561727
	I0328 01:07:45.369533   12896 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:07:45.369533   12896 round_trippers.go:580]     Content-Type: application/json
	I0328 01:07:45.370527   12896 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0328 01:07:45.370527   12896 round_trippers.go:577] Response Headers:
	I0328 01:07:45.370527   12896 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:07:45.370527   12896 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:07:45.370527   12896 round_trippers.go:580]     Content-Length: 291
	I0328 01:07:45.370527   12896 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:07:45 GMT
	I0328 01:07:45.370527   12896 round_trippers.go:580]     Audit-Id: b2e08705-0aed-454c-abf3-4be535634139
	I0328 01:07:45.370527   12896 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:07:45.370527   12896 round_trippers.go:580]     Content-Type: application/json
	I0328 01:07:45.370527   12896 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"362","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Fields [truncated 4936 chars]
	I0328 01:07:45.370527   12896 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"17bc3f37-1942-477c-bce3-5e4800f160b6","resourceVersion":"394","creationTimestamp":"2024-03-28T01:07:31Z"},"spec":{"replicas":2},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0328 01:07:45.370527   12896 request.go:1212] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"17bc3f37-1942-477c-bce3-5e4800f160b6","resourceVersion":"394","creationTimestamp":"2024-03-28T01:07:31Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0328 01:07:45.370527   12896 round_trippers.go:463] PUT https://172.28.227.122:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0328 01:07:45.370527   12896 round_trippers.go:469] Request Headers:
	I0328 01:07:45.370527   12896 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:07:45.370527   12896 round_trippers.go:473]     Content-Type: application/json
	I0328 01:07:45.370527   12896 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:07:45.412120   12896 round_trippers.go:574] Response Status: 200 OK in 40 milliseconds
	I0328 01:07:45.412206   12896 round_trippers.go:577] Response Headers:
	I0328 01:07:45.412283   12896 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:07:45.412283   12896 round_trippers.go:580]     Content-Type: application/json
	I0328 01:07:45.412283   12896 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:07:45.412283   12896 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:07:45.412283   12896 round_trippers.go:580]     Content-Length: 291
	I0328 01:07:45.412283   12896 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:07:45 GMT
	I0328 01:07:45.412283   12896 round_trippers.go:580]     Audit-Id: aaf7e0b6-1465-47cf-9663-c9ee7ed4c002
	I0328 01:07:45.412283   12896 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"17bc3f37-1942-477c-bce3-5e4800f160b6","resourceVersion":"396","creationTimestamp":"2024-03-28T01:07:31Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0328 01:07:45.869511   12896 round_trippers.go:463] GET https://172.28.227.122:8443/api/v1/nodes/multinode-240000
	I0328 01:07:45.869586   12896 round_trippers.go:469] Request Headers:
	I0328 01:07:45.869511   12896 round_trippers.go:463] GET https://172.28.227.122:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0328 01:07:45.869586   12896 round_trippers.go:469] Request Headers:
	I0328 01:07:45.869704   12896 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:07:45.869704   12896 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:07:45.869586   12896 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:07:45.869753   12896 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:07:45.873350   12896 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0328 01:07:45.874241   12896 round_trippers.go:577] Response Headers:
	I0328 01:07:45.874300   12896 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:07:45.874300   12896 round_trippers.go:580]     Content-Type: application/json
	I0328 01:07:45.874337   12896 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:07:45.874337   12896 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:07:45.874337   12896 round_trippers.go:580]     Content-Length: 291
	I0328 01:07:45.874480   12896 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:07:45 GMT
	I0328 01:07:45.874480   12896 round_trippers.go:580]     Audit-Id: 2e08ae42-6845-4d54-aa1e-a567a6b13c87
	I0328 01:07:45.874480   12896 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"17bc3f37-1942-477c-bce3-5e4800f160b6","resourceVersion":"408","creationTimestamp":"2024-03-28T01:07:31Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0328 01:07:45.874480   12896 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-240000" context rescaled to 1 replicas
	I0328 01:07:45.875195   12896 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 01:07:45.875275   12896 round_trippers.go:577] Response Headers:
	I0328 01:07:45.875275   12896 round_trippers.go:580]     Audit-Id: db7443d6-5ef0-4281-b3fc-6b6b1d853d69
	I0328 01:07:45.875275   12896 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:07:45.875275   12896 round_trippers.go:580]     Content-Type: application/json
	I0328 01:07:45.875361   12896 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:07:45.875393   12896 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:07:45.875393   12896 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:07:45 GMT
	I0328 01:07:45.875738   12896 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"362","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Fields [truncated 4936 chars]
	I0328 01:07:46.361420   12896 round_trippers.go:463] GET https://172.28.227.122:8443/api/v1/nodes/multinode-240000
	I0328 01:07:46.361420   12896 round_trippers.go:469] Request Headers:
	I0328 01:07:46.361420   12896 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:07:46.361420   12896 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:07:46.370099   12896 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0328 01:07:46.370099   12896 round_trippers.go:577] Response Headers:
	I0328 01:07:46.370099   12896 round_trippers.go:580]     Content-Type: application/json
	I0328 01:07:46.370099   12896 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:07:46.370099   12896 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:07:46.370099   12896 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:07:46 GMT
	I0328 01:07:46.370099   12896 round_trippers.go:580]     Audit-Id: 4f4e6e55-bc8e-4926-98a4-efbd222d7f29
	I0328 01:07:46.370099   12896 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:07:46.371147   12896 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"362","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Fields [truncated 4936 chars]
	I0328 01:07:46.848334   12896 main.go:141] libmachine: [stdout =====>] : Running
	
	I0328 01:07:46.849332   12896 main.go:141] libmachine: [stderr =====>] : 
	I0328 01:07:46.850391   12896 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0328 01:07:46.851255   12896 kapi.go:59] client config for multinode-240000: &rest.Config{Host:"https://172.28.227.122:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-240000\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-240000\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CA
Data:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x26ab500), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0328 01:07:46.851897   12896 addons.go:234] Setting addon default-storageclass=true in "multinode-240000"
	I0328 01:07:46.852016   12896 host.go:66] Checking if "multinode-240000" exists ...
	I0328 01:07:46.853168   12896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-240000 ).state
	I0328 01:07:46.854424   12896 main.go:141] libmachine: [stdout =====>] : Running
	
	I0328 01:07:46.854577   12896 main.go:141] libmachine: [stderr =====>] : 
	I0328 01:07:46.858056   12896 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0328 01:07:46.860378   12896 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0328 01:07:46.860378   12896 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0328 01:07:46.860378   12896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-240000 ).state
	I0328 01:07:46.863534   12896 round_trippers.go:463] GET https://172.28.227.122:8443/api/v1/nodes/multinode-240000
	I0328 01:07:46.863534   12896 round_trippers.go:469] Request Headers:
	I0328 01:07:46.864576   12896 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:07:46.864576   12896 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:07:46.869545   12896 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 01:07:46.869545   12896 round_trippers.go:577] Response Headers:
	I0328 01:07:46.869545   12896 round_trippers.go:580]     Audit-Id: c9a7a9cf-0a32-46c2-8fb4-6cb2d85f9af1
	I0328 01:07:46.869545   12896 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:07:46.869545   12896 round_trippers.go:580]     Content-Type: application/json
	I0328 01:07:46.869545   12896 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:07:46.869545   12896 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:07:46.869545   12896 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:07:46 GMT
	I0328 01:07:46.869545   12896 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"362","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Fields [truncated 4936 chars]
	I0328 01:07:47.370343   12896 round_trippers.go:463] GET https://172.28.227.122:8443/api/v1/nodes/multinode-240000
	I0328 01:07:47.370416   12896 round_trippers.go:469] Request Headers:
	I0328 01:07:47.370491   12896 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:07:47.370491   12896 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:07:47.375832   12896 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 01:07:47.375902   12896 round_trippers.go:577] Response Headers:
	I0328 01:07:47.375902   12896 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:07:47.375902   12896 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:07:47.375902   12896 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:07:47 GMT
	I0328 01:07:47.375990   12896 round_trippers.go:580]     Audit-Id: 13bacca5-d647-444c-9992-e1bb8117be1e
	I0328 01:07:47.376049   12896 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:07:47.376049   12896 round_trippers.go:580]     Content-Type: application/json
	I0328 01:07:47.376549   12896 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"362","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Fields [truncated 4936 chars]
	I0328 01:07:47.377301   12896 node_ready.go:53] node "multinode-240000" has status "Ready":"False"
	I0328 01:07:47.862374   12896 round_trippers.go:463] GET https://172.28.227.122:8443/api/v1/nodes/multinode-240000
	I0328 01:07:47.862374   12896 round_trippers.go:469] Request Headers:
	I0328 01:07:47.862374   12896 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:07:47.862374   12896 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:07:47.866766   12896 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 01:07:47.866857   12896 round_trippers.go:577] Response Headers:
	I0328 01:07:47.866923   12896 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:07:47.866923   12896 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:07:47.866923   12896 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:07:47 GMT
	I0328 01:07:47.866923   12896 round_trippers.go:580]     Audit-Id: 44f64574-2ae7-4a2a-b249-92e14e7b7d29
	I0328 01:07:47.866923   12896 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:07:47.867006   12896 round_trippers.go:580]     Content-Type: application/json
	I0328 01:07:47.867106   12896 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"362","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Fields [truncated 4936 chars]
	I0328 01:07:48.372555   12896 round_trippers.go:463] GET https://172.28.227.122:8443/api/v1/nodes/multinode-240000
	I0328 01:07:48.372625   12896 round_trippers.go:469] Request Headers:
	I0328 01:07:48.372625   12896 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:07:48.372625   12896 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:07:48.377547   12896 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 01:07:48.377700   12896 round_trippers.go:577] Response Headers:
	I0328 01:07:48.377700   12896 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:07:48.377700   12896 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:07:48 GMT
	I0328 01:07:48.377700   12896 round_trippers.go:580]     Audit-Id: 803b6d8b-2eef-4f3f-89f0-596a66b2428b
	I0328 01:07:48.377700   12896 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:07:48.377700   12896 round_trippers.go:580]     Content-Type: application/json
	I0328 01:07:48.377700   12896 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:07:48.378528   12896 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"362","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Fields [truncated 4936 chars]
	I0328 01:07:48.866455   12896 round_trippers.go:463] GET https://172.28.227.122:8443/api/v1/nodes/multinode-240000
	I0328 01:07:48.866651   12896 round_trippers.go:469] Request Headers:
	I0328 01:07:48.866651   12896 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:07:48.866651   12896 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:07:48.871812   12896 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 01:07:48.871903   12896 round_trippers.go:577] Response Headers:
	I0328 01:07:48.871903   12896 round_trippers.go:580]     Audit-Id: d080a87d-ffe5-49e3-b33f-86234b506cdf
	I0328 01:07:48.871903   12896 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:07:48.871903   12896 round_trippers.go:580]     Content-Type: application/json
	I0328 01:07:48.871903   12896 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:07:48.871980   12896 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:07:48.871980   12896 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:07:48 GMT
	I0328 01:07:48.872387   12896 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"362","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Fields [truncated 4936 chars]
	I0328 01:07:49.318439   12896 main.go:141] libmachine: [stdout =====>] : Running
	
	I0328 01:07:49.318439   12896 main.go:141] libmachine: [stdout =====>] : Running
	
	I0328 01:07:49.318439   12896 main.go:141] libmachine: [stderr =====>] : 
	I0328 01:07:49.318439   12896 main.go:141] libmachine: [stderr =====>] : 
	I0328 01:07:49.319443   12896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-240000 ).networkadapters[0]).ipaddresses[0]
	I0328 01:07:49.319443   12896 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0328 01:07:49.319443   12896 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0328 01:07:49.319443   12896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-240000 ).state
	I0328 01:07:49.358914   12896 round_trippers.go:463] GET https://172.28.227.122:8443/api/v1/nodes/multinode-240000
	I0328 01:07:49.358914   12896 round_trippers.go:469] Request Headers:
	I0328 01:07:49.358914   12896 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:07:49.358914   12896 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:07:49.363949   12896 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 01:07:49.364993   12896 round_trippers.go:577] Response Headers:
	I0328 01:07:49.365062   12896 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:07:49 GMT
	I0328 01:07:49.365062   12896 round_trippers.go:580]     Audit-Id: 10b784bd-9076-4179-a7cd-a4aea36d3be4
	I0328 01:07:49.365062   12896 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:07:49.365123   12896 round_trippers.go:580]     Content-Type: application/json
	I0328 01:07:49.365123   12896 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:07:49.365182   12896 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:07:49.367001   12896 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"362","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Fields [truncated 4936 chars]
	I0328 01:07:49.869134   12896 round_trippers.go:463] GET https://172.28.227.122:8443/api/v1/nodes/multinode-240000
	I0328 01:07:49.869134   12896 round_trippers.go:469] Request Headers:
	I0328 01:07:49.869241   12896 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:07:49.869241   12896 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:07:49.872828   12896 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0328 01:07:49.873308   12896 round_trippers.go:577] Response Headers:
	I0328 01:07:49.873308   12896 round_trippers.go:580]     Audit-Id: 30c0d747-363f-4520-9f2f-152d0fef00b3
	I0328 01:07:49.873308   12896 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:07:49.873430   12896 round_trippers.go:580]     Content-Type: application/json
	I0328 01:07:49.873430   12896 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:07:49.873485   12896 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:07:49.873485   12896 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:07:49 GMT
	I0328 01:07:49.874114   12896 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"362","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Fields [truncated 4936 chars]
	I0328 01:07:49.874834   12896 node_ready.go:53] node "multinode-240000" has status "Ready":"False"
	I0328 01:07:50.360860   12896 round_trippers.go:463] GET https://172.28.227.122:8443/api/v1/nodes/multinode-240000
	I0328 01:07:50.360860   12896 round_trippers.go:469] Request Headers:
	I0328 01:07:50.360860   12896 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:07:50.360860   12896 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:07:50.365226   12896 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 01:07:50.365397   12896 round_trippers.go:577] Response Headers:
	I0328 01:07:50.365397   12896 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:07:50.365397   12896 round_trippers.go:580]     Content-Type: application/json
	I0328 01:07:50.365397   12896 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:07:50.365397   12896 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:07:50.365397   12896 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:07:50 GMT
	I0328 01:07:50.365489   12896 round_trippers.go:580]     Audit-Id: e693e7a3-4f19-4e4b-b86d-ff80cc9db43e
	I0328 01:07:50.365721   12896 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"362","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Fields [truncated 4936 chars]
	I0328 01:07:50.868840   12896 round_trippers.go:463] GET https://172.28.227.122:8443/api/v1/nodes/multinode-240000
	I0328 01:07:50.868926   12896 round_trippers.go:469] Request Headers:
	I0328 01:07:50.868926   12896 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:07:50.868926   12896 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:07:50.872105   12896 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0328 01:07:50.872498   12896 round_trippers.go:577] Response Headers:
	I0328 01:07:50.872498   12896 round_trippers.go:580]     Audit-Id: 00b0410b-9e82-4182-ac86-eb87b39b75e4
	I0328 01:07:50.872498   12896 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:07:50.872498   12896 round_trippers.go:580]     Content-Type: application/json
	I0328 01:07:50.872498   12896 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:07:50.872498   12896 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:07:50.872498   12896 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:07:50 GMT
	I0328 01:07:50.873131   12896 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"362","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Fields [truncated 4936 chars]
	I0328 01:07:51.358275   12896 round_trippers.go:463] GET https://172.28.227.122:8443/api/v1/nodes/multinode-240000
	I0328 01:07:51.358370   12896 round_trippers.go:469] Request Headers:
	I0328 01:07:51.358370   12896 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:07:51.358370   12896 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:07:51.362999   12896 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 01:07:51.362999   12896 round_trippers.go:577] Response Headers:
	I0328 01:07:51.363080   12896 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:07:51.363080   12896 round_trippers.go:580]     Content-Type: application/json
	I0328 01:07:51.363080   12896 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:07:51.363080   12896 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:07:51.363080   12896 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:07:51 GMT
	I0328 01:07:51.363080   12896 round_trippers.go:580]     Audit-Id: 2e5f0ab6-943d-45b3-8017-60b6b4c91050
	I0328 01:07:51.363541   12896 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"362","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Fields [truncated 4936 chars]
	I0328 01:07:51.724270   12896 main.go:141] libmachine: [stdout =====>] : Running
	
	I0328 01:07:51.724772   12896 main.go:141] libmachine: [stderr =====>] : 
	I0328 01:07:51.724845   12896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-240000 ).networkadapters[0]).ipaddresses[0]
	I0328 01:07:51.866328   12896 round_trippers.go:463] GET https://172.28.227.122:8443/api/v1/nodes/multinode-240000
	I0328 01:07:51.866328   12896 round_trippers.go:469] Request Headers:
	I0328 01:07:51.866328   12896 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:07:51.866328   12896 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:07:51.870945   12896 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 01:07:51.871368   12896 round_trippers.go:577] Response Headers:
	I0328 01:07:51.871368   12896 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:07:51 GMT
	I0328 01:07:51.871506   12896 round_trippers.go:580]     Audit-Id: 04a67877-d1bb-4d0d-8124-1ce8a4e33290
	I0328 01:07:51.871506   12896 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:07:51.871506   12896 round_trippers.go:580]     Content-Type: application/json
	I0328 01:07:51.871506   12896 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:07:51.871506   12896 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:07:51.871580   12896 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"362","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Fields [truncated 4936 chars]
	I0328 01:07:52.146373   12896 main.go:141] libmachine: [stdout =====>] : 172.28.227.122
	
	I0328 01:07:52.146755   12896 main.go:141] libmachine: [stderr =====>] : 
	I0328 01:07:52.147123   12896 sshutil.go:53] new ssh client: &{IP:172.28.227.122 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-240000\id_rsa Username:docker}
	I0328 01:07:52.310576   12896 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0328 01:07:52.358996   12896 round_trippers.go:463] GET https://172.28.227.122:8443/api/v1/nodes/multinode-240000
	I0328 01:07:52.359061   12896 round_trippers.go:469] Request Headers:
	I0328 01:07:52.359061   12896 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:07:52.359061   12896 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:07:52.363352   12896 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 01:07:52.363352   12896 round_trippers.go:577] Response Headers:
	I0328 01:07:52.363352   12896 round_trippers.go:580]     Audit-Id: 07356a87-de03-4bcb-a250-073b4f17f4c7
	I0328 01:07:52.363352   12896 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:07:52.363352   12896 round_trippers.go:580]     Content-Type: application/json
	I0328 01:07:52.363352   12896 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:07:52.363352   12896 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:07:52.363352   12896 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:07:52 GMT
	I0328 01:07:52.363352   12896 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"362","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Fields [truncated 4936 chars]
	I0328 01:07:52.364360   12896 node_ready.go:53] node "multinode-240000" has status "Ready":"False"
	I0328 01:07:52.865382   12896 round_trippers.go:463] GET https://172.28.227.122:8443/api/v1/nodes/multinode-240000
	I0328 01:07:52.865452   12896 round_trippers.go:469] Request Headers:
	I0328 01:07:52.865452   12896 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:07:52.865452   12896 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:07:52.869328   12896 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0328 01:07:52.870162   12896 round_trippers.go:577] Response Headers:
	I0328 01:07:52.870162   12896 round_trippers.go:580]     Audit-Id: 76a3f19d-16a5-4bb0-8f99-5aec87cd052e
	I0328 01:07:52.870162   12896 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:07:52.870162   12896 round_trippers.go:580]     Content-Type: application/json
	I0328 01:07:52.870162   12896 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:07:52.870162   12896 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:07:52.870162   12896 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:07:52 GMT
	I0328 01:07:52.870532   12896 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"362","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Fields [truncated 4936 chars]
	I0328 01:07:53.008330   12896 command_runner.go:130] > serviceaccount/storage-provisioner created
	I0328 01:07:53.008418   12896 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I0328 01:07:53.008418   12896 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0328 01:07:53.008418   12896 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0328 01:07:53.008418   12896 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I0328 01:07:53.008418   12896 command_runner.go:130] > pod/storage-provisioner created
	I0328 01:07:53.358858   12896 round_trippers.go:463] GET https://172.28.227.122:8443/api/v1/nodes/multinode-240000
	I0328 01:07:53.358858   12896 round_trippers.go:469] Request Headers:
	I0328 01:07:53.358858   12896 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:07:53.358858   12896 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:07:53.363915   12896 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 01:07:53.363987   12896 round_trippers.go:577] Response Headers:
	I0328 01:07:53.363987   12896 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:07:53 GMT
	I0328 01:07:53.363987   12896 round_trippers.go:580]     Audit-Id: 83e64c65-1919-4a2c-a470-5be3854bfe86
	I0328 01:07:53.363987   12896 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:07:53.363987   12896 round_trippers.go:580]     Content-Type: application/json
	I0328 01:07:53.363987   12896 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:07:53.363987   12896 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:07:53.363987   12896 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"362","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Fields [truncated 4936 chars]
	I0328 01:07:53.867350   12896 round_trippers.go:463] GET https://172.28.227.122:8443/api/v1/nodes/multinode-240000
	I0328 01:07:53.867579   12896 round_trippers.go:469] Request Headers:
	I0328 01:07:53.867579   12896 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:07:53.867579   12896 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:07:53.874494   12896 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0328 01:07:53.874494   12896 round_trippers.go:577] Response Headers:
	I0328 01:07:53.874494   12896 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:07:53 GMT
	I0328 01:07:53.874579   12896 round_trippers.go:580]     Audit-Id: 19ba7bbf-36e0-4556-8984-9c20f45e3b2e
	I0328 01:07:53.874597   12896 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:07:53.874597   12896 round_trippers.go:580]     Content-Type: application/json
	I0328 01:07:53.874597   12896 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:07:53.874617   12896 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:07:53.874617   12896 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"362","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Fields [truncated 4936 chars]
	I0328 01:07:54.367209   12896 round_trippers.go:463] GET https://172.28.227.122:8443/api/v1/nodes/multinode-240000
	I0328 01:07:54.367209   12896 round_trippers.go:469] Request Headers:
	I0328 01:07:54.367209   12896 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:07:54.367209   12896 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:07:54.371847   12896 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 01:07:54.371847   12896 round_trippers.go:577] Response Headers:
	I0328 01:07:54.371847   12896 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:07:54.371847   12896 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:07:54.371847   12896 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:07:54 GMT
	I0328 01:07:54.371847   12896 round_trippers.go:580]     Audit-Id: c080dcc5-7535-4ebd-875f-181758d99840
	I0328 01:07:54.371847   12896 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:07:54.371847   12896 round_trippers.go:580]     Content-Type: application/json
	I0328 01:07:54.373344   12896 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"362","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Fields [truncated 4936 chars]
	I0328 01:07:54.373344   12896 node_ready.go:53] node "multinode-240000" has status "Ready":"False"
	I0328 01:07:54.478482   12896 main.go:141] libmachine: [stdout =====>] : 172.28.227.122
	
	I0328 01:07:54.478482   12896 main.go:141] libmachine: [stderr =====>] : 
	I0328 01:07:54.479412   12896 sshutil.go:53] new ssh client: &{IP:172.28.227.122 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-240000\id_rsa Username:docker}
	I0328 01:07:54.620446   12896 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0328 01:07:54.807219   12896 command_runner.go:130] > storageclass.storage.k8s.io/standard created
	I0328 01:07:54.807494   12896 round_trippers.go:463] GET https://172.28.227.122:8443/apis/storage.k8s.io/v1/storageclasses
	I0328 01:07:54.807494   12896 round_trippers.go:469] Request Headers:
	I0328 01:07:54.807576   12896 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:07:54.807576   12896 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:07:54.819460   12896 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0328 01:07:54.819460   12896 round_trippers.go:577] Response Headers:
	I0328 01:07:54.819460   12896 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:07:54.819460   12896 round_trippers.go:580]     Content-Length: 1273
	I0328 01:07:54.819460   12896 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:07:54 GMT
	I0328 01:07:54.819460   12896 round_trippers.go:580]     Audit-Id: 64376486-8a3f-4774-be8b-6ffa03654031
	I0328 01:07:54.819460   12896 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:07:54.819460   12896 round_trippers.go:580]     Content-Type: application/json
	I0328 01:07:54.819460   12896 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:07:54.820462   12896 request.go:1212] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"435"},"items":[{"metadata":{"name":"standard","uid":"78e50f23-7e51-483d-8c61-05d6558871b9","resourceVersion":"435","creationTimestamp":"2024-03-28T01:07:54Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-03-28T01:07:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I0328 01:07:54.820462   12896 request.go:1212] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"78e50f23-7e51-483d-8c61-05d6558871b9","resourceVersion":"435","creationTimestamp":"2024-03-28T01:07:54Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-03-28T01:07:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0328 01:07:54.820462   12896 round_trippers.go:463] PUT https://172.28.227.122:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0328 01:07:54.820462   12896 round_trippers.go:469] Request Headers:
	I0328 01:07:54.820462   12896 round_trippers.go:473]     Content-Type: application/json
	I0328 01:07:54.820462   12896 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:07:54.820462   12896 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:07:54.824481   12896 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 01:07:54.825520   12896 round_trippers.go:577] Response Headers:
	I0328 01:07:54.825560   12896 round_trippers.go:580]     Content-Type: application/json
	I0328 01:07:54.825560   12896 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:07:54.825560   12896 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:07:54.825560   12896 round_trippers.go:580]     Content-Length: 1220
	I0328 01:07:54.825560   12896 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:07:54 GMT
	I0328 01:07:54.825596   12896 round_trippers.go:580]     Audit-Id: 3dd1d58a-8fc2-4054-a6d8-148060345d10
	I0328 01:07:54.825596   12896 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:07:54.825698   12896 request.go:1212] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"78e50f23-7e51-483d-8c61-05d6558871b9","resourceVersion":"435","creationTimestamp":"2024-03-28T01:07:54Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-03-28T01:07:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0328 01:07:54.835991   12896 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0328 01:07:54.839655   12896 addons.go:505] duration metric: took 10.4208504s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0328 01:07:54.871115   12896 round_trippers.go:463] GET https://172.28.227.122:8443/api/v1/nodes/multinode-240000
	I0328 01:07:54.871179   12896 round_trippers.go:469] Request Headers:
	I0328 01:07:54.871179   12896 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:07:54.871179   12896 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:07:54.876004   12896 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 01:07:54.876004   12896 round_trippers.go:577] Response Headers:
	I0328 01:07:54.876078   12896 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:07:54.876078   12896 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:07:54.876078   12896 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:07:54 GMT
	I0328 01:07:54.876078   12896 round_trippers.go:580]     Audit-Id: 7c61f7ad-f206-4a99-a835-6e42f8bc9e78
	I0328 01:07:54.876078   12896 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:07:54.876078   12896 round_trippers.go:580]     Content-Type: application/json
	I0328 01:07:54.876949   12896 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"362","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Fields [truncated 4936 chars]
	I0328 01:07:55.360692   12896 round_trippers.go:463] GET https://172.28.227.122:8443/api/v1/nodes/multinode-240000
	I0328 01:07:55.360692   12896 round_trippers.go:469] Request Headers:
	I0328 01:07:55.360692   12896 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:07:55.360692   12896 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:07:55.367490   12896 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0328 01:07:55.367490   12896 round_trippers.go:577] Response Headers:
	I0328 01:07:55.367490   12896 round_trippers.go:580]     Audit-Id: 68d6ebd8-00d0-419f-8497-61d35ae3f6b9
	I0328 01:07:55.367490   12896 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:07:55.367490   12896 round_trippers.go:580]     Content-Type: application/json
	I0328 01:07:55.367490   12896 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:07:55.367490   12896 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:07:55.367490   12896 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:07:55 GMT
	I0328 01:07:55.367490   12896 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"362","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Fields [truncated 4936 chars]
	I0328 01:07:55.865054   12896 round_trippers.go:463] GET https://172.28.227.122:8443/api/v1/nodes/multinode-240000
	I0328 01:07:55.865116   12896 round_trippers.go:469] Request Headers:
	I0328 01:07:55.865116   12896 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:07:55.865116   12896 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:07:55.868971   12896 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0328 01:07:55.868971   12896 round_trippers.go:577] Response Headers:
	I0328 01:07:55.868971   12896 round_trippers.go:580]     Audit-Id: 56759609-38a2-450c-b47e-129e0c3fd163
	I0328 01:07:55.868971   12896 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:07:55.868971   12896 round_trippers.go:580]     Content-Type: application/json
	I0328 01:07:55.868971   12896 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:07:55.868971   12896 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:07:55.868971   12896 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:07:55 GMT
	I0328 01:07:55.869648   12896 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"362","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Fields [truncated 4936 chars]
	I0328 01:07:56.366131   12896 round_trippers.go:463] GET https://172.28.227.122:8443/api/v1/nodes/multinode-240000
	I0328 01:07:56.366311   12896 round_trippers.go:469] Request Headers:
	I0328 01:07:56.366311   12896 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:07:56.366367   12896 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:07:56.370991   12896 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 01:07:56.371449   12896 round_trippers.go:577] Response Headers:
	I0328 01:07:56.371449   12896 round_trippers.go:580]     Content-Type: application/json
	I0328 01:07:56.371449   12896 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:07:56.371449   12896 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:07:56.371449   12896 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:07:56 GMT
	I0328 01:07:56.371449   12896 round_trippers.go:580]     Audit-Id: ac306b08-4665-4749-9239-58bd769ddf74
	I0328 01:07:56.371449   12896 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:07:56.372145   12896 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"362","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Fields [truncated 4936 chars]
	I0328 01:07:56.867657   12896 round_trippers.go:463] GET https://172.28.227.122:8443/api/v1/nodes/multinode-240000
	I0328 01:07:56.867739   12896 round_trippers.go:469] Request Headers:
	I0328 01:07:56.867739   12896 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:07:56.867739   12896 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:07:56.875020   12896 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0328 01:07:56.875794   12896 round_trippers.go:577] Response Headers:
	I0328 01:07:56.875794   12896 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:07:56 GMT
	I0328 01:07:56.875794   12896 round_trippers.go:580]     Audit-Id: a90f7bc2-5677-4251-a845-e106c0ed0c54
	I0328 01:07:56.875794   12896 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:07:56.875794   12896 round_trippers.go:580]     Content-Type: application/json
	I0328 01:07:56.875794   12896 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:07:56.875794   12896 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:07:56.876086   12896 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"362","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Fields [truncated 4936 chars]
	I0328 01:07:56.876662   12896 node_ready.go:53] node "multinode-240000" has status "Ready":"False"
	I0328 01:07:57.371312   12896 round_trippers.go:463] GET https://172.28.227.122:8443/api/v1/nodes/multinode-240000
	I0328 01:07:57.371389   12896 round_trippers.go:469] Request Headers:
	I0328 01:07:57.371389   12896 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:07:57.371389   12896 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:07:57.376044   12896 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 01:07:57.376044   12896 round_trippers.go:577] Response Headers:
	I0328 01:07:57.376483   12896 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:07:57.376483   12896 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:07:57.376483   12896 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:07:57 GMT
	I0328 01:07:57.376483   12896 round_trippers.go:580]     Audit-Id: 150d6c9c-81fe-4824-bf1c-8fd48b4214ae
	I0328 01:07:57.376483   12896 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:07:57.376483   12896 round_trippers.go:580]     Content-Type: application/json
	I0328 01:07:57.376696   12896 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"362","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Fields [truncated 4936 chars]
	I0328 01:07:57.869901   12896 round_trippers.go:463] GET https://172.28.227.122:8443/api/v1/nodes/multinode-240000
	I0328 01:07:57.870085   12896 round_trippers.go:469] Request Headers:
	I0328 01:07:57.870085   12896 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:07:57.870085   12896 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:07:57.877916   12896 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0328 01:07:57.877916   12896 round_trippers.go:577] Response Headers:
	I0328 01:07:57.877916   12896 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:07:57 GMT
	I0328 01:07:57.877916   12896 round_trippers.go:580]     Audit-Id: 864e30ec-3878-45c7-8382-4c8105675720
	I0328 01:07:57.877916   12896 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:07:57.877916   12896 round_trippers.go:580]     Content-Type: application/json
	I0328 01:07:57.877916   12896 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:07:57.877916   12896 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:07:57.878932   12896 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"438","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Fields [truncated 4791 chars]
	I0328 01:07:57.878932   12896 node_ready.go:49] node "multinode-240000" has status "Ready":"True"
	I0328 01:07:57.878932   12896 node_ready.go:38] duration metric: took 12.5214376s for node "multinode-240000" to be "Ready" ...
	I0328 01:07:57.878932   12896 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0328 01:07:57.878932   12896 round_trippers.go:463] GET https://172.28.227.122:8443/api/v1/namespaces/kube-system/pods
	I0328 01:07:57.878932   12896 round_trippers.go:469] Request Headers:
	I0328 01:07:57.878932   12896 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:07:57.878932   12896 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:07:57.883940   12896 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 01:07:57.884620   12896 round_trippers.go:577] Response Headers:
	I0328 01:07:57.884620   12896 round_trippers.go:580]     Audit-Id: b39c5634-5b4e-47ee-9225-743a73404fa5
	I0328 01:07:57.884620   12896 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:07:57.884620   12896 round_trippers.go:580]     Content-Type: application/json
	I0328 01:07:57.884620   12896 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:07:57.884620   12896 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:07:57.884620   12896 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:07:57 GMT
	I0328 01:07:57.885653   12896 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"443"},"items":[{"metadata":{"name":"coredns-76f75df574-776ph","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"dc1416cc-736d-4eab-b95d-e963572b78e3","resourceVersion":"443","creationTimestamp":"2024-03-28T01:07:44Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"840f6c47-adf3-4c08-8b73-04e55f98f236","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:07:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"840f6c47-adf3-4c08-8b73-04e55f98f236\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 54514 chars]
	I0328 01:07:57.890048   12896 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-776ph" in "kube-system" namespace to be "Ready" ...
	I0328 01:07:57.890048   12896 round_trippers.go:463] GET https://172.28.227.122:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-776ph
	I0328 01:07:57.890048   12896 round_trippers.go:469] Request Headers:
	I0328 01:07:57.890048   12896 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:07:57.890048   12896 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:07:57.900051   12896 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0328 01:07:57.900051   12896 round_trippers.go:577] Response Headers:
	I0328 01:07:57.900051   12896 round_trippers.go:580]     Audit-Id: c8c9273b-152c-4322-81ac-ef856059a7fb
	I0328 01:07:57.900051   12896 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:07:57.900051   12896 round_trippers.go:580]     Content-Type: application/json
	I0328 01:07:57.900051   12896 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:07:57.900051   12896 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:07:57.900051   12896 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:07:57 GMT
	I0328 01:07:57.900051   12896 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-76f75df574-776ph","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"dc1416cc-736d-4eab-b95d-e963572b78e3","resourceVersion":"443","creationTimestamp":"2024-03-28T01:07:44Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"840f6c47-adf3-4c08-8b73-04e55f98f236","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:07:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"840f6c47-adf3-4c08-8b73-04e55f98f236\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6449 chars]
	I0328 01:07:57.901085   12896 round_trippers.go:463] GET https://172.28.227.122:8443/api/v1/nodes/multinode-240000
	I0328 01:07:57.901085   12896 round_trippers.go:469] Request Headers:
	I0328 01:07:57.901085   12896 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:07:57.901085   12896 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:07:57.904042   12896 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0328 01:07:57.904042   12896 round_trippers.go:577] Response Headers:
	I0328 01:07:57.904042   12896 round_trippers.go:580]     Content-Type: application/json
	I0328 01:07:57.904042   12896 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:07:57.904042   12896 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:07:57.904042   12896 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:07:57 GMT
	I0328 01:07:57.904042   12896 round_trippers.go:580]     Audit-Id: 834d5135-176d-4311-b1fc-f28f368bc9f2
	I0328 01:07:57.904042   12896 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:07:57.904576   12896 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"438","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Fields [truncated 4791 chars]
	I0328 01:07:58.405359   12896 round_trippers.go:463] GET https://172.28.227.122:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-776ph
	I0328 01:07:58.405387   12896 round_trippers.go:469] Request Headers:
	I0328 01:07:58.405387   12896 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:07:58.405387   12896 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:07:58.409087   12896 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0328 01:07:58.409087   12896 round_trippers.go:577] Response Headers:
	I0328 01:07:58.409087   12896 round_trippers.go:580]     Audit-Id: c8062852-94bf-44de-bb9b-bc2e74614b9e
	I0328 01:07:58.409453   12896 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:07:58.409453   12896 round_trippers.go:580]     Content-Type: application/json
	I0328 01:07:58.409453   12896 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:07:58.409453   12896 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:07:58.409453   12896 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:07:58 GMT
	I0328 01:07:58.409846   12896 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-76f75df574-776ph","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"dc1416cc-736d-4eab-b95d-e963572b78e3","resourceVersion":"443","creationTimestamp":"2024-03-28T01:07:44Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"840f6c47-adf3-4c08-8b73-04e55f98f236","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:07:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"840f6c47-adf3-4c08-8b73-04e55f98f236\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6449 chars]
	I0328 01:07:58.410908   12896 round_trippers.go:463] GET https://172.28.227.122:8443/api/v1/nodes/multinode-240000
	I0328 01:07:58.410908   12896 round_trippers.go:469] Request Headers:
	I0328 01:07:58.410908   12896 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:07:58.410908   12896 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:07:58.416567   12896 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 01:07:58.416567   12896 round_trippers.go:577] Response Headers:
	I0328 01:07:58.416567   12896 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:07:58.416567   12896 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:07:58.416567   12896 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:07:58 GMT
	I0328 01:07:58.416567   12896 round_trippers.go:580]     Audit-Id: c4794f48-7d3f-4d98-9ce2-8911c32c90dd
	I0328 01:07:58.416567   12896 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:07:58.416567   12896 round_trippers.go:580]     Content-Type: application/json
	I0328 01:07:58.417117   12896 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"438","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Fields [truncated 4791 chars]
	I0328 01:07:58.894964   12896 round_trippers.go:463] GET https://172.28.227.122:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-776ph
	I0328 01:07:58.895017   12896 round_trippers.go:469] Request Headers:
	I0328 01:07:58.895087   12896 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:07:58.895087   12896 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:07:58.902400   12896 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0328 01:07:58.902400   12896 round_trippers.go:577] Response Headers:
	I0328 01:07:58.902400   12896 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:07:58.902400   12896 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:07:58.902400   12896 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:07:58 GMT
	I0328 01:07:58.902400   12896 round_trippers.go:580]     Audit-Id: a822fdd9-eaf9-4536-a845-99468b15ee01
	I0328 01:07:58.902400   12896 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:07:58.902400   12896 round_trippers.go:580]     Content-Type: application/json
	I0328 01:07:58.903128   12896 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-76f75df574-776ph","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"dc1416cc-736d-4eab-b95d-e963572b78e3","resourceVersion":"443","creationTimestamp":"2024-03-28T01:07:44Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"840f6c47-adf3-4c08-8b73-04e55f98f236","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:07:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"840f6c47-adf3-4c08-8b73-04e55f98f236\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6449 chars]
	I0328 01:07:58.903902   12896 round_trippers.go:463] GET https://172.28.227.122:8443/api/v1/nodes/multinode-240000
	I0328 01:07:58.903902   12896 round_trippers.go:469] Request Headers:
	I0328 01:07:58.903902   12896 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:07:58.903902   12896 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:07:58.906755   12896 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0328 01:07:58.906755   12896 round_trippers.go:577] Response Headers:
	I0328 01:07:58.906755   12896 round_trippers.go:580]     Content-Type: application/json
	I0328 01:07:58.906755   12896 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:07:58.906755   12896 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:07:58.906755   12896 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:07:58 GMT
	I0328 01:07:58.906755   12896 round_trippers.go:580]     Audit-Id: 54cbdb45-b047-48c0-b6d4-fdf6e6711b81
	I0328 01:07:58.906755   12896 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:07:58.907792   12896 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"438","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Fields [truncated 4791 chars]
	I0328 01:07:59.396421   12896 round_trippers.go:463] GET https://172.28.227.122:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-776ph
	I0328 01:07:59.396593   12896 round_trippers.go:469] Request Headers:
	I0328 01:07:59.396593   12896 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:07:59.396593   12896 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:07:59.400177   12896 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0328 01:07:59.400177   12896 round_trippers.go:577] Response Headers:
	I0328 01:07:59.400177   12896 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:07:59 GMT
	I0328 01:07:59.400177   12896 round_trippers.go:580]     Audit-Id: 39d41f14-126c-41be-b009-59e63c3d707a
	I0328 01:07:59.400177   12896 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:07:59.400177   12896 round_trippers.go:580]     Content-Type: application/json
	I0328 01:07:59.400177   12896 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:07:59.400177   12896 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:07:59.401635   12896 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-76f75df574-776ph","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"dc1416cc-736d-4eab-b95d-e963572b78e3","resourceVersion":"443","creationTimestamp":"2024-03-28T01:07:44Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"840f6c47-adf3-4c08-8b73-04e55f98f236","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:07:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"840f6c47-adf3-4c08-8b73-04e55f98f236\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6449 chars]
	I0328 01:07:59.401947   12896 round_trippers.go:463] GET https://172.28.227.122:8443/api/v1/nodes/multinode-240000
	I0328 01:07:59.401947   12896 round_trippers.go:469] Request Headers:
	I0328 01:07:59.401947   12896 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:07:59.401947   12896 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:07:59.404720   12896 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0328 01:07:59.405720   12896 round_trippers.go:577] Response Headers:
	I0328 01:07:59.405720   12896 round_trippers.go:580]     Audit-Id: 8dc4913b-a870-4fd9-af28-f5b0bc9d124e
	I0328 01:07:59.405720   12896 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:07:59.405720   12896 round_trippers.go:580]     Content-Type: application/json
	I0328 01:07:59.405720   12896 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:07:59.405720   12896 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:07:59.405799   12896 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:07:59 GMT
	I0328 01:07:59.406064   12896 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"438","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Fields [truncated 4791 chars]
	I0328 01:07:59.895230   12896 round_trippers.go:463] GET https://172.28.227.122:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-776ph
	I0328 01:07:59.895291   12896 round_trippers.go:469] Request Headers:
	I0328 01:07:59.895291   12896 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:07:59.895291   12896 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:07:59.899879   12896 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 01:07:59.900306   12896 round_trippers.go:577] Response Headers:
	I0328 01:07:59.900306   12896 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:07:59.900306   12896 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:07:59 GMT
	I0328 01:07:59.900306   12896 round_trippers.go:580]     Audit-Id: 0f962128-048a-402f-8b7e-9a1b5898df2d
	I0328 01:07:59.900306   12896 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:07:59.900306   12896 round_trippers.go:580]     Content-Type: application/json
	I0328 01:07:59.900306   12896 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:07:59.900527   12896 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-76f75df574-776ph","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"dc1416cc-736d-4eab-b95d-e963572b78e3","resourceVersion":"456","creationTimestamp":"2024-03-28T01:07:44Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"840f6c47-adf3-4c08-8b73-04e55f98f236","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:07:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"840f6c47-adf3-4c08-8b73-04e55f98f236\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6580 chars]
	I0328 01:07:59.901295   12896 round_trippers.go:463] GET https://172.28.227.122:8443/api/v1/nodes/multinode-240000
	I0328 01:07:59.901295   12896 round_trippers.go:469] Request Headers:
	I0328 01:07:59.901295   12896 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:07:59.901295   12896 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:07:59.907549   12896 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0328 01:07:59.907674   12896 round_trippers.go:577] Response Headers:
	I0328 01:07:59.907674   12896 round_trippers.go:580]     Audit-Id: 547ed099-74a7-4e12-b5ab-b3966dbb22fb
	I0328 01:07:59.907674   12896 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:07:59.907674   12896 round_trippers.go:580]     Content-Type: application/json
	I0328 01:07:59.907674   12896 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:07:59.907674   12896 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:07:59.907674   12896 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:07:59 GMT
	I0328 01:07:59.907674   12896 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"438","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Fields [truncated 4791 chars]
	I0328 01:07:59.908273   12896 pod_ready.go:92] pod "coredns-76f75df574-776ph" in "kube-system" namespace has status "Ready":"True"
	I0328 01:07:59.908273   12896 pod_ready.go:81] duration metric: took 2.0182107s for pod "coredns-76f75df574-776ph" in "kube-system" namespace to be "Ready" ...
	I0328 01:07:59.908359   12896 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-240000" in "kube-system" namespace to be "Ready" ...
	I0328 01:07:59.908474   12896 round_trippers.go:463] GET https://172.28.227.122:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-240000
	I0328 01:07:59.908474   12896 round_trippers.go:469] Request Headers:
	I0328 01:07:59.908474   12896 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:07:59.908474   12896 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:07:59.914072   12896 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 01:07:59.914072   12896 round_trippers.go:577] Response Headers:
	I0328 01:07:59.914072   12896 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:07:59.914072   12896 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:07:59 GMT
	I0328 01:07:59.914072   12896 round_trippers.go:580]     Audit-Id: 33a8f282-0fcd-490c-a819-4e5918385f0c
	I0328 01:07:59.914072   12896 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:07:59.914072   12896 round_trippers.go:580]     Content-Type: application/json
	I0328 01:07:59.914072   12896 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:07:59.914072   12896 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-240000","namespace":"kube-system","uid":"8c9e76e4-ed9f-4595-aa5e-ddd6e74f4e93","resourceVersion":"418","creationTimestamp":"2024-03-28T01:07:31Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.28.227.122:2379","kubernetes.io/config.hash":"3bf911dad00226d1456d6201aff35c8b","kubernetes.io/config.mirror":"3bf911dad00226d1456d6201aff35c8b","kubernetes.io/config.seen":"2024-03-28T01:07:31.458002457Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:07:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6170 chars]
	I0328 01:07:59.914072   12896 round_trippers.go:463] GET https://172.28.227.122:8443/api/v1/nodes/multinode-240000
	I0328 01:07:59.914072   12896 round_trippers.go:469] Request Headers:
	I0328 01:07:59.914072   12896 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:07:59.914072   12896 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:07:59.932065   12896 round_trippers.go:574] Response Status: 200 OK in 17 milliseconds
	I0328 01:07:59.932065   12896 round_trippers.go:577] Response Headers:
	I0328 01:07:59.932065   12896 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:07:59.932065   12896 round_trippers.go:580]     Content-Type: application/json
	I0328 01:07:59.932065   12896 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:07:59.932065   12896 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:07:59.932065   12896 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:07:59 GMT
	I0328 01:07:59.932065   12896 round_trippers.go:580]     Audit-Id: 6e3e74d5-d36d-4652-a935-7ff4fdfbbd07
	I0328 01:07:59.932065   12896 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"438","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Fields [truncated 4791 chars]
	I0328 01:07:59.933058   12896 pod_ready.go:92] pod "etcd-multinode-240000" in "kube-system" namespace has status "Ready":"True"
	I0328 01:07:59.933058   12896 pod_ready.go:81] duration metric: took 24.6983ms for pod "etcd-multinode-240000" in "kube-system" namespace to be "Ready" ...
	I0328 01:07:59.933058   12896 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-240000" in "kube-system" namespace to be "Ready" ...
	I0328 01:07:59.933058   12896 round_trippers.go:463] GET https://172.28.227.122:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-240000
	I0328 01:07:59.933058   12896 round_trippers.go:469] Request Headers:
	I0328 01:07:59.933058   12896 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:07:59.933058   12896 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:07:59.938072   12896 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 01:07:59.938072   12896 round_trippers.go:577] Response Headers:
	I0328 01:07:59.938072   12896 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:07:59 GMT
	I0328 01:07:59.938072   12896 round_trippers.go:580]     Audit-Id: 8f020db7-c26c-4ee4-a67b-806c51463490
	I0328 01:07:59.938072   12896 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:07:59.938072   12896 round_trippers.go:580]     Content-Type: application/json
	I0328 01:07:59.938072   12896 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:07:59.938072   12896 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:07:59.938965   12896 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-240000","namespace":"kube-system","uid":"7736298d-3898-4693-84bf-2311305bf52c","resourceVersion":"420","creationTimestamp":"2024-03-28T01:07:31Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.28.227.122:8443","kubernetes.io/config.hash":"08b85a8adf05b50d7739532a291175d4","kubernetes.io/config.mirror":"08b85a8adf05b50d7739532a291175d4","kubernetes.io/config.seen":"2024-03-28T01:07:31.458006857Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:07:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7704 chars]
	I0328 01:07:59.939401   12896 round_trippers.go:463] GET https://172.28.227.122:8443/api/v1/nodes/multinode-240000
	I0328 01:07:59.939401   12896 round_trippers.go:469] Request Headers:
	I0328 01:07:59.939401   12896 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:07:59.939401   12896 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:07:59.945545   12896 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0328 01:07:59.945545   12896 round_trippers.go:577] Response Headers:
	I0328 01:07:59.945545   12896 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:07:59 GMT
	I0328 01:07:59.945545   12896 round_trippers.go:580]     Audit-Id: eeb5487c-bc5e-44c3-9677-b9f62fe67868
	I0328 01:07:59.945545   12896 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:07:59.945545   12896 round_trippers.go:580]     Content-Type: application/json
	I0328 01:07:59.945545   12896 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:07:59.945545   12896 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:07:59.946093   12896 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"438","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Fields [truncated 4791 chars]
	I0328 01:07:59.946246   12896 pod_ready.go:92] pod "kube-apiserver-multinode-240000" in "kube-system" namespace has status "Ready":"True"
	I0328 01:07:59.946246   12896 pod_ready.go:81] duration metric: took 13.1877ms for pod "kube-apiserver-multinode-240000" in "kube-system" namespace to be "Ready" ...
	I0328 01:07:59.946246   12896 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-240000" in "kube-system" namespace to be "Ready" ...
	I0328 01:07:59.946246   12896 round_trippers.go:463] GET https://172.28.227.122:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-240000
	I0328 01:07:59.946246   12896 round_trippers.go:469] Request Headers:
	I0328 01:07:59.946246   12896 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:07:59.946246   12896 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:07:59.948830   12896 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0328 01:07:59.949409   12896 round_trippers.go:577] Response Headers:
	I0328 01:07:59.949409   12896 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:07:59.949409   12896 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:07:59 GMT
	I0328 01:07:59.949409   12896 round_trippers.go:580]     Audit-Id: d44c1b4a-934c-4415-b9bd-beb7dd4d958a
	I0328 01:07:59.949409   12896 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:07:59.949409   12896 round_trippers.go:580]     Content-Type: application/json
	I0328 01:07:59.949471   12896 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:07:59.949697   12896 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-240000","namespace":"kube-system","uid":"4a79ab06-2314-43bb-8e37-45b9aab24e4e","resourceVersion":"423","creationTimestamp":"2024-03-28T01:07:31Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"092744cdc60a216294790b52c372bdaa","kubernetes.io/config.mirror":"092744cdc60a216294790b52c372bdaa","kubernetes.io/config.seen":"2024-03-28T01:07:31.458008757Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:07:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7269 chars]
	I0328 01:07:59.950283   12896 round_trippers.go:463] GET https://172.28.227.122:8443/api/v1/nodes/multinode-240000
	I0328 01:07:59.950283   12896 round_trippers.go:469] Request Headers:
	I0328 01:07:59.950348   12896 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:07:59.950348   12896 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:07:59.958496   12896 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0328 01:07:59.958496   12896 round_trippers.go:577] Response Headers:
	I0328 01:07:59.958496   12896 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:07:59.958496   12896 round_trippers.go:580]     Content-Type: application/json
	I0328 01:07:59.958496   12896 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:07:59.958496   12896 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:07:59.958496   12896 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:07:59 GMT
	I0328 01:07:59.958496   12896 round_trippers.go:580]     Audit-Id: cea40655-60da-4152-b8fa-45fce260ff01
	I0328 01:07:59.958496   12896 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"438","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Fields [truncated 4791 chars]
	I0328 01:07:59.959475   12896 pod_ready.go:92] pod "kube-controller-manager-multinode-240000" in "kube-system" namespace has status "Ready":"True"
	I0328 01:07:59.959475   12896 pod_ready.go:81] duration metric: took 13.2294ms for pod "kube-controller-manager-multinode-240000" in "kube-system" namespace to be "Ready" ...
	I0328 01:07:59.959475   12896 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-47rqg" in "kube-system" namespace to be "Ready" ...
	I0328 01:07:59.959475   12896 round_trippers.go:463] GET https://172.28.227.122:8443/api/v1/namespaces/kube-system/pods/kube-proxy-47rqg
	I0328 01:07:59.959475   12896 round_trippers.go:469] Request Headers:
	I0328 01:07:59.959475   12896 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:07:59.959475   12896 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:07:59.964487   12896 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 01:07:59.964487   12896 round_trippers.go:577] Response Headers:
	I0328 01:07:59.964487   12896 round_trippers.go:580]     Audit-Id: 82bf959b-03c3-4de9-ad05-11a2f8131329
	I0328 01:07:59.964487   12896 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:07:59.964487   12896 round_trippers.go:580]     Content-Type: application/json
	I0328 01:07:59.964487   12896 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:07:59.964487   12896 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:07:59.964487   12896 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:07:59 GMT
	I0328 01:07:59.964487   12896 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-47rqg","generateName":"kube-proxy-","namespace":"kube-system","uid":"22fd5683-834d-47ae-a5b4-1ed980514e1b","resourceVersion":"413","creationTimestamp":"2024-03-28T01:07:44Z","labels":{"controller-revision-hash":"7659797656","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"386441f6-e376-4593-92ba-fa739207b68d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:07:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"386441f6-e376-4593-92ba-fa739207b68d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5833 chars]
	I0328 01:07:59.965462   12896 round_trippers.go:463] GET https://172.28.227.122:8443/api/v1/nodes/multinode-240000
	I0328 01:07:59.965462   12896 round_trippers.go:469] Request Headers:
	I0328 01:07:59.965462   12896 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:07:59.965462   12896 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:07:59.967476   12896 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0328 01:07:59.967476   12896 round_trippers.go:577] Response Headers:
	I0328 01:07:59.968475   12896 round_trippers.go:580]     Audit-Id: a2f54e3d-4fcc-4cbe-8bd6-611784cb57b7
	I0328 01:07:59.968475   12896 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:07:59.968475   12896 round_trippers.go:580]     Content-Type: application/json
	I0328 01:07:59.968475   12896 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:07:59.968475   12896 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:07:59.968475   12896 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:07:59 GMT
	I0328 01:07:59.968475   12896 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"438","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Fields [truncated 4791 chars]
	I0328 01:07:59.969477   12896 pod_ready.go:92] pod "kube-proxy-47rqg" in "kube-system" namespace has status "Ready":"True"
	I0328 01:07:59.969477   12896 pod_ready.go:81] duration metric: took 10.0017ms for pod "kube-proxy-47rqg" in "kube-system" namespace to be "Ready" ...
	I0328 01:07:59.969477   12896 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-240000" in "kube-system" namespace to be "Ready" ...
	I0328 01:08:00.098859   12896 request.go:629] Waited for 129.3808ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.227.122:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-240000
	I0328 01:08:00.099084   12896 round_trippers.go:463] GET https://172.28.227.122:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-240000
	I0328 01:08:00.099084   12896 round_trippers.go:469] Request Headers:
	I0328 01:08:00.099084   12896 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:08:00.099084   12896 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:08:00.102721   12896 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0328 01:08:00.102721   12896 round_trippers.go:577] Response Headers:
	I0328 01:08:00.102721   12896 round_trippers.go:580]     Content-Type: application/json
	I0328 01:08:00.102721   12896 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:08:00.103138   12896 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:08:00.103138   12896 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:08:00 GMT
	I0328 01:08:00.103138   12896 round_trippers.go:580]     Audit-Id: be708048-dfcd-4a1a-bb98-a13657de4e11
	I0328 01:08:00.103138   12896 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:08:00.103199   12896 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-240000","namespace":"kube-system","uid":"7670489f-fb6c-4b5f-80e8-5fe8de8d7d19","resourceVersion":"419","creationTimestamp":"2024-03-28T01:07:28Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"f5f9b00a2a0d8b16290abf555def0fb3","kubernetes.io/config.mirror":"f5f9b00a2a0d8b16290abf555def0fb3","kubernetes.io/config.seen":"2024-03-28T01:07:21.513186595Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:07:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4999 chars]
	I0328 01:08:00.300690   12896 request.go:629] Waited for 196.403ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.227.122:8443/api/v1/nodes/multinode-240000
	I0328 01:08:00.300981   12896 round_trippers.go:463] GET https://172.28.227.122:8443/api/v1/nodes/multinode-240000
	I0328 01:08:00.300981   12896 round_trippers.go:469] Request Headers:
	I0328 01:08:00.300981   12896 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:08:00.300981   12896 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:08:00.305835   12896 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 01:08:00.305835   12896 round_trippers.go:577] Response Headers:
	I0328 01:08:00.305835   12896 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:08:00.305835   12896 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:08:00 GMT
	I0328 01:08:00.305835   12896 round_trippers.go:580]     Audit-Id: ecbf1acc-eb82-4b7a-ad25-7c803fac2b12
	I0328 01:08:00.305835   12896 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:08:00.305835   12896 round_trippers.go:580]     Content-Type: application/json
	I0328 01:08:00.306342   12896 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:08:00.306805   12896 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"438","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Fields [truncated 4791 chars]
	I0328 01:08:00.307220   12896 pod_ready.go:92] pod "kube-scheduler-multinode-240000" in "kube-system" namespace has status "Ready":"True"
	I0328 01:08:00.307220   12896 pod_ready.go:81] duration metric: took 337.7409ms for pod "kube-scheduler-multinode-240000" in "kube-system" namespace to be "Ready" ...
	I0328 01:08:00.307220   12896 pod_ready.go:38] duration metric: took 2.4282721s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0328 01:08:00.307220   12896 api_server.go:52] waiting for apiserver process to appear ...
	I0328 01:08:00.321495   12896 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:08:00.351753   12896 command_runner.go:130] > 2234
	I0328 01:08:00.352025   12896 api_server.go:72] duration metric: took 15.9332805s to wait for apiserver process to appear ...
	I0328 01:08:00.352025   12896 api_server.go:88] waiting for apiserver healthz status ...
	I0328 01:08:00.352025   12896 api_server.go:253] Checking apiserver healthz at https://172.28.227.122:8443/healthz ...
	I0328 01:08:00.360286   12896 api_server.go:279] https://172.28.227.122:8443/healthz returned 200:
	ok
	I0328 01:08:00.361120   12896 round_trippers.go:463] GET https://172.28.227.122:8443/version
	I0328 01:08:00.361120   12896 round_trippers.go:469] Request Headers:
	I0328 01:08:00.361120   12896 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:08:00.361120   12896 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:08:00.363178   12896 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0328 01:08:00.363178   12896 round_trippers.go:577] Response Headers:
	I0328 01:08:00.363178   12896 round_trippers.go:580]     Content-Type: application/json
	I0328 01:08:00.363178   12896 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:08:00.363178   12896 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:08:00.363178   12896 round_trippers.go:580]     Content-Length: 263
	I0328 01:08:00.363703   12896 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:08:00 GMT
	I0328 01:08:00.363703   12896 round_trippers.go:580]     Audit-Id: f7f7377a-baad-4770-aaba-21215d73f560
	I0328 01:08:00.363703   12896 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:08:00.363753   12896 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "29",
	  "gitVersion": "v1.29.3",
	  "gitCommit": "6813625b7cd706db5bc7388921be03071e1a492d",
	  "gitTreeState": "clean",
	  "buildDate": "2024-03-14T23:58:36Z",
	  "goVersion": "go1.21.8",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0328 01:08:00.363848   12896 api_server.go:141] control plane version: v1.29.3
	I0328 01:08:00.363914   12896 api_server.go:131] duration metric: took 11.8887ms to wait for apiserver health ...
	I0328 01:08:00.363958   12896 system_pods.go:43] waiting for kube-system pods to appear ...
	I0328 01:08:00.502154   12896 request.go:629] Waited for 138.1177ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.227.122:8443/api/v1/namespaces/kube-system/pods
	I0328 01:08:00.502504   12896 round_trippers.go:463] GET https://172.28.227.122:8443/api/v1/namespaces/kube-system/pods
	I0328 01:08:00.502504   12896 round_trippers.go:469] Request Headers:
	I0328 01:08:00.502504   12896 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:08:00.502504   12896 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:08:00.508050   12896 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 01:08:00.508050   12896 round_trippers.go:577] Response Headers:
	I0328 01:08:00.508050   12896 round_trippers.go:580]     Audit-Id: ad5ae035-a9a1-4eaf-837f-bf3f5a959019
	I0328 01:08:00.508050   12896 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:08:00.508050   12896 round_trippers.go:580]     Content-Type: application/json
	I0328 01:08:00.508050   12896 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:08:00.508050   12896 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:08:00.508512   12896 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:08:00 GMT
	I0328 01:08:00.509670   12896 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"460"},"items":[{"metadata":{"name":"coredns-76f75df574-776ph","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"dc1416cc-736d-4eab-b95d-e963572b78e3","resourceVersion":"456","creationTimestamp":"2024-03-28T01:07:44Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"840f6c47-adf3-4c08-8b73-04e55f98f236","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:07:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"840f6c47-adf3-4c08-8b73-04e55f98f236\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 56498 chars]
	I0328 01:08:00.512395   12896 system_pods.go:59] 8 kube-system pods found
	I0328 01:08:00.512452   12896 system_pods.go:61] "coredns-76f75df574-776ph" [dc1416cc-736d-4eab-b95d-e963572b78e3] Running
	I0328 01:08:00.512452   12896 system_pods.go:61] "etcd-multinode-240000" [8c9e76e4-ed9f-4595-aa5e-ddd6e74f4e93] Running
	I0328 01:08:00.512452   12896 system_pods.go:61] "kindnet-rwghf" [7c75e225-0e90-4916-bf27-a00a036e0955] Running
	I0328 01:08:00.512452   12896 system_pods.go:61] "kube-apiserver-multinode-240000" [7736298d-3898-4693-84bf-2311305bf52c] Running
	I0328 01:08:00.512452   12896 system_pods.go:61] "kube-controller-manager-multinode-240000" [4a79ab06-2314-43bb-8e37-45b9aab24e4e] Running
	I0328 01:08:00.512512   12896 system_pods.go:61] "kube-proxy-47rqg" [22fd5683-834d-47ae-a5b4-1ed980514e1b] Running
	I0328 01:08:00.512512   12896 system_pods.go:61] "kube-scheduler-multinode-240000" [7670489f-fb6c-4b5f-80e8-5fe8de8d7d19] Running
	I0328 01:08:00.512536   12896 system_pods.go:61] "storage-provisioner" [3f881c2f-5b9a-4dc5-bc8c-56784b9ff60f] Running
	I0328 01:08:00.512536   12896 system_pods.go:74] duration metric: took 148.5764ms to wait for pod list to return data ...
	I0328 01:08:00.512563   12896 default_sa.go:34] waiting for default service account to be created ...
	I0328 01:08:00.704456   12896 request.go:629] Waited for 191.411ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.227.122:8443/api/v1/namespaces/default/serviceaccounts
	I0328 01:08:00.704605   12896 round_trippers.go:463] GET https://172.28.227.122:8443/api/v1/namespaces/default/serviceaccounts
	I0328 01:08:00.704605   12896 round_trippers.go:469] Request Headers:
	I0328 01:08:00.704605   12896 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:08:00.704687   12896 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:08:00.709334   12896 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 01:08:00.709334   12896 round_trippers.go:577] Response Headers:
	I0328 01:08:00.709334   12896 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:08:00.709334   12896 round_trippers.go:580]     Content-Type: application/json
	I0328 01:08:00.709334   12896 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:08:00.710244   12896 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:08:00.710244   12896 round_trippers.go:580]     Content-Length: 261
	I0328 01:08:00.710244   12896 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:08:00 GMT
	I0328 01:08:00.710244   12896 round_trippers.go:580]     Audit-Id: 816ce9db-0bd7-4093-b13f-28cbdbae27bb
	I0328 01:08:00.710244   12896 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"460"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"8bb5dc68-e1fd-49c8-89aa-9b79f7d72fc2","resourceVersion":"356","creationTimestamp":"2024-03-28T01:07:44Z"}}]}
	I0328 01:08:00.710401   12896 default_sa.go:45] found service account: "default"
	I0328 01:08:00.710401   12896 default_sa.go:55] duration metric: took 197.8364ms for default service account to be created ...
	I0328 01:08:00.710401   12896 system_pods.go:116] waiting for k8s-apps to be running ...
	I0328 01:08:00.907434   12896 request.go:629] Waited for 196.7577ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.227.122:8443/api/v1/namespaces/kube-system/pods
	I0328 01:08:00.907434   12896 round_trippers.go:463] GET https://172.28.227.122:8443/api/v1/namespaces/kube-system/pods
	I0328 01:08:00.907434   12896 round_trippers.go:469] Request Headers:
	I0328 01:08:00.907434   12896 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:08:00.907434   12896 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:08:00.914467   12896 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0328 01:08:00.914538   12896 round_trippers.go:577] Response Headers:
	I0328 01:08:00.914538   12896 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:08:00 GMT
	I0328 01:08:00.914538   12896 round_trippers.go:580]     Audit-Id: f1e1026d-cd3b-4ef7-998a-c8e7366f4a25
	I0328 01:08:00.914538   12896 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:08:00.914538   12896 round_trippers.go:580]     Content-Type: application/json
	I0328 01:08:00.914621   12896 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:08:00.914621   12896 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:08:00.915417   12896 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"460"},"items":[{"metadata":{"name":"coredns-76f75df574-776ph","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"dc1416cc-736d-4eab-b95d-e963572b78e3","resourceVersion":"456","creationTimestamp":"2024-03-28T01:07:44Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"840f6c47-adf3-4c08-8b73-04e55f98f236","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:07:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"840f6c47-adf3-4c08-8b73-04e55f98f236\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 56498 chars]
	I0328 01:08:00.918407   12896 system_pods.go:86] 8 kube-system pods found
	I0328 01:08:00.918407   12896 system_pods.go:89] "coredns-76f75df574-776ph" [dc1416cc-736d-4eab-b95d-e963572b78e3] Running
	I0328 01:08:00.918407   12896 system_pods.go:89] "etcd-multinode-240000" [8c9e76e4-ed9f-4595-aa5e-ddd6e74f4e93] Running
	I0328 01:08:00.918407   12896 system_pods.go:89] "kindnet-rwghf" [7c75e225-0e90-4916-bf27-a00a036e0955] Running
	I0328 01:08:00.918407   12896 system_pods.go:89] "kube-apiserver-multinode-240000" [7736298d-3898-4693-84bf-2311305bf52c] Running
	I0328 01:08:00.918407   12896 system_pods.go:89] "kube-controller-manager-multinode-240000" [4a79ab06-2314-43bb-8e37-45b9aab24e4e] Running
	I0328 01:08:00.918407   12896 system_pods.go:89] "kube-proxy-47rqg" [22fd5683-834d-47ae-a5b4-1ed980514e1b] Running
	I0328 01:08:00.918407   12896 system_pods.go:89] "kube-scheduler-multinode-240000" [7670489f-fb6c-4b5f-80e8-5fe8de8d7d19] Running
	I0328 01:08:00.918407   12896 system_pods.go:89] "storage-provisioner" [3f881c2f-5b9a-4dc5-bc8c-56784b9ff60f] Running
	I0328 01:08:00.918407   12896 system_pods.go:126] duration metric: took 208.0047ms to wait for k8s-apps to be running ...
	I0328 01:08:00.918407   12896 system_svc.go:44] waiting for kubelet service to be running ....
	I0328 01:08:00.931425   12896 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0328 01:08:00.960624   12896 system_svc.go:56] duration metric: took 42.2171ms WaitForService to wait for kubelet
	I0328 01:08:00.960691   12896 kubeadm.go:576] duration metric: took 16.5419423s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0328 01:08:00.960691   12896 node_conditions.go:102] verifying NodePressure condition ...
	I0328 01:08:01.096143   12896 request.go:629] Waited for 135.3371ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.227.122:8443/api/v1/nodes
	I0328 01:08:01.096772   12896 round_trippers.go:463] GET https://172.28.227.122:8443/api/v1/nodes
	I0328 01:08:01.096843   12896 round_trippers.go:469] Request Headers:
	I0328 01:08:01.096843   12896 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:08:01.096843   12896 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:08:01.101118   12896 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0328 01:08:01.101118   12896 round_trippers.go:577] Response Headers:
	I0328 01:08:01.101118   12896 round_trippers.go:580]     Audit-Id: 25175a9b-3784-4683-9a27-0234e57c5e3b
	I0328 01:08:01.101118   12896 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:08:01.101118   12896 round_trippers.go:580]     Content-Type: application/json
	I0328 01:08:01.101118   12896 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:08:01.101118   12896 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:08:01.101118   12896 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:08:01 GMT
	I0328 01:08:01.101491   12896 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"460"},"items":[{"metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"438","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"mana
gedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1" [truncated 4844 chars]
	I0328 01:08:01.102073   12896 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0328 01:08:01.102178   12896 node_conditions.go:123] node cpu capacity is 2
	I0328 01:08:01.102238   12896 node_conditions.go:105] duration metric: took 141.5458ms to run NodePressure ...
	I0328 01:08:01.102238   12896 start.go:240] waiting for startup goroutines ...
	I0328 01:08:01.102287   12896 start.go:245] waiting for cluster config update ...
	I0328 01:08:01.102320   12896 start.go:254] writing updated cluster config ...
	I0328 01:08:01.107382   12896 out.go:177] 
	I0328 01:08:01.110455   12896 config.go:182] Loaded profile config "ha-170000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0328 01:08:01.117276   12896 config.go:182] Loaded profile config "multinode-240000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0328 01:08:01.117276   12896 profile.go:142] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-240000\config.json ...
	I0328 01:08:01.121770   12896 out.go:177] * Starting "multinode-240000-m02" worker node in "multinode-240000" cluster
	I0328 01:08:01.126762   12896 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0328 01:08:01.126964   12896 cache.go:56] Caching tarball of preloaded images
	I0328 01:08:01.127128   12896 preload.go:173] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0328 01:08:01.127128   12896 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0328 01:08:01.127128   12896 profile.go:142] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-240000\config.json ...
	I0328 01:08:01.133513   12896 start.go:360] acquireMachinesLock for multinode-240000-m02: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0328 01:08:01.133834   12896 start.go:364] duration metric: took 240.3µs to acquireMachinesLock for "multinode-240000-m02"
	I0328 01:08:01.134071   12896 start.go:93] Provisioning new machine with config: &{Name:multinode-240000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{
KubernetesVersion:v1.29.3 ClusterName:multinode-240000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.28.227.122 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDi
sks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0328 01:08:01.134391   12896 start.go:125] createHost starting for "m02" (driver="hyperv")
	I0328 01:08:01.138024   12896 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0328 01:08:01.138024   12896 start.go:159] libmachine.API.Create for "multinode-240000" (driver="hyperv")
	I0328 01:08:01.138573   12896 client.go:168] LocalClient.Create starting
	I0328 01:08:01.138712   12896 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem
	I0328 01:08:01.139445   12896 main.go:141] libmachine: Decoding PEM data...
	I0328 01:08:01.139445   12896 main.go:141] libmachine: Parsing certificate...
	I0328 01:08:01.139445   12896 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem
	I0328 01:08:01.139445   12896 main.go:141] libmachine: Decoding PEM data...
	I0328 01:08:01.139445   12896 main.go:141] libmachine: Parsing certificate...
	I0328 01:08:01.139445   12896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0328 01:08:03.296217   12896 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0328 01:08:03.296217   12896 main.go:141] libmachine: [stderr =====>] : 
	I0328 01:08:03.297126   12896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0328 01:08:05.181825   12896 main.go:141] libmachine: [stdout =====>] : False
	
	I0328 01:08:05.181825   12896 main.go:141] libmachine: [stderr =====>] : 
	I0328 01:08:05.181825   12896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0328 01:08:06.800066   12896 main.go:141] libmachine: [stdout =====>] : True
	
	I0328 01:08:06.800526   12896 main.go:141] libmachine: [stderr =====>] : 
	I0328 01:08:06.800607   12896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0328 01:08:10.807573   12896 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0328 01:08:10.807573   12896 main.go:141] libmachine: [stderr =====>] : 
	I0328 01:08:10.810502   12896 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube6/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.0-1711559712-18485-amd64.iso...
	I0328 01:08:11.338472   12896 main.go:141] libmachine: Creating SSH key...
	I0328 01:08:11.703442   12896 main.go:141] libmachine: Creating VM...
	I0328 01:08:11.703442   12896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0328 01:08:14.877664   12896 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0328 01:08:14.877992   12896 main.go:141] libmachine: [stderr =====>] : 
	I0328 01:08:14.877992   12896 main.go:141] libmachine: Using switch "Default Switch"
	I0328 01:08:14.878100   12896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0328 01:08:16.803655   12896 main.go:141] libmachine: [stdout =====>] : True
	
	I0328 01:08:16.803655   12896 main.go:141] libmachine: [stderr =====>] : 
	I0328 01:08:16.804401   12896 main.go:141] libmachine: Creating VHD
	I0328 01:08:16.804468   12896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-240000-m02\fixed.vhd' -SizeBytes 10MB -Fixed
	I0328 01:08:20.806520   12896 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube6
	Path                    : C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-240000-m02\fixed
	                          .vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 13823A96-A212-4CF3-B960-AC94ED738FD0
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0328 01:08:20.806520   12896 main.go:141] libmachine: [stderr =====>] : 
	I0328 01:08:20.806520   12896 main.go:141] libmachine: Writing magic tar header
	I0328 01:08:20.806520   12896 main.go:141] libmachine: Writing SSH key tar header
	I0328 01:08:20.816449   12896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-240000-m02\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-240000-m02\disk.vhd' -VHDType Dynamic -DeleteSource
	I0328 01:08:24.211778   12896 main.go:141] libmachine: [stdout =====>] : 
	I0328 01:08:24.212417   12896 main.go:141] libmachine: [stderr =====>] : 
	I0328 01:08:24.212417   12896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-240000-m02\disk.vhd' -SizeBytes 20000MB
	I0328 01:08:26.904033   12896 main.go:141] libmachine: [stdout =====>] : 
	I0328 01:08:26.904033   12896 main.go:141] libmachine: [stderr =====>] : 
	I0328 01:08:26.904600   12896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM multinode-240000-m02 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-240000-m02' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0328 01:08:30.813461   12896 main.go:141] libmachine: [stdout =====>] : 
	Name                 State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----                 ----- ----------- ----------------- ------   ------             -------
	multinode-240000-m02 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0328 01:08:30.813461   12896 main.go:141] libmachine: [stderr =====>] : 
	I0328 01:08:30.814740   12896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName multinode-240000-m02 -DynamicMemoryEnabled $false
	I0328 01:08:33.214846   12896 main.go:141] libmachine: [stdout =====>] : 
	I0328 01:08:33.214846   12896 main.go:141] libmachine: [stderr =====>] : 
	I0328 01:08:33.214846   12896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor multinode-240000-m02 -Count 2
	I0328 01:08:35.530000   12896 main.go:141] libmachine: [stdout =====>] : 
	I0328 01:08:35.530000   12896 main.go:141] libmachine: [stderr =====>] : 
	I0328 01:08:35.530464   12896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName multinode-240000-m02 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-240000-m02\boot2docker.iso'
	I0328 01:08:38.349159   12896 main.go:141] libmachine: [stdout =====>] : 
	I0328 01:08:38.349159   12896 main.go:141] libmachine: [stderr =====>] : 
	I0328 01:08:38.349492   12896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName multinode-240000-m02 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-240000-m02\disk.vhd'
	I0328 01:08:41.196748   12896 main.go:141] libmachine: [stdout =====>] : 
	I0328 01:08:41.197453   12896 main.go:141] libmachine: [stderr =====>] : 
	I0328 01:08:41.197453   12896 main.go:141] libmachine: Starting VM...
	I0328 01:08:41.197536   12896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-240000-m02
	I0328 01:08:44.518827   12896 main.go:141] libmachine: [stdout =====>] : 
	I0328 01:08:44.518827   12896 main.go:141] libmachine: [stderr =====>] : 
	I0328 01:08:44.518827   12896 main.go:141] libmachine: Waiting for host to start...
	I0328 01:08:44.518827   12896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-240000-m02 ).state
	I0328 01:08:46.950739   12896 main.go:141] libmachine: [stdout =====>] : Running
	
	I0328 01:08:46.950739   12896 main.go:141] libmachine: [stderr =====>] : 
	I0328 01:08:46.951149   12896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-240000-m02 ).networkadapters[0]).ipaddresses[0]
	I0328 01:08:49.658090   12896 main.go:141] libmachine: [stdout =====>] : 
	I0328 01:08:49.658509   12896 main.go:141] libmachine: [stderr =====>] : 
	I0328 01:08:50.673579   12896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-240000-m02 ).state
	I0328 01:08:53.075435   12896 main.go:141] libmachine: [stdout =====>] : Running
	
	I0328 01:08:53.075435   12896 main.go:141] libmachine: [stderr =====>] : 
	I0328 01:08:53.076137   12896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-240000-m02 ).networkadapters[0]).ipaddresses[0]
	I0328 01:08:55.810228   12896 main.go:141] libmachine: [stdout =====>] : 
	I0328 01:08:55.810228   12896 main.go:141] libmachine: [stderr =====>] : 
	I0328 01:08:56.818655   12896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-240000-m02 ).state
	I0328 01:08:59.179638   12896 main.go:141] libmachine: [stdout =====>] : Running
	
	I0328 01:08:59.179638   12896 main.go:141] libmachine: [stderr =====>] : 
	I0328 01:08:59.179996   12896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-240000-m02 ).networkadapters[0]).ipaddresses[0]
	I0328 01:09:01.935729   12896 main.go:141] libmachine: [stdout =====>] : 
	I0328 01:09:01.935729   12896 main.go:141] libmachine: [stderr =====>] : 
	I0328 01:09:02.937642   12896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-240000-m02 ).state
	I0328 01:09:05.258907   12896 main.go:141] libmachine: [stdout =====>] : Running
	
	I0328 01:09:05.259413   12896 main.go:141] libmachine: [stderr =====>] : 
	I0328 01:09:05.259471   12896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-240000-m02 ).networkadapters[0]).ipaddresses[0]
	I0328 01:09:08.005823   12896 main.go:141] libmachine: [stdout =====>] : 
	I0328 01:09:08.005823   12896 main.go:141] libmachine: [stderr =====>] : 
	I0328 01:09:09.019338   12896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-240000-m02 ).state
	I0328 01:09:11.379640   12896 main.go:141] libmachine: [stdout =====>] : Running
	
	I0328 01:09:11.379640   12896 main.go:141] libmachine: [stderr =====>] : 
	I0328 01:09:11.379640   12896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-240000-m02 ).networkadapters[0]).ipaddresses[0]
	I0328 01:09:14.168657   12896 main.go:141] libmachine: [stdout =====>] : 172.28.230.250
	
	I0328 01:09:14.168765   12896 main.go:141] libmachine: [stderr =====>] : 
	I0328 01:09:14.168886   12896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-240000-m02 ).state
	I0328 01:09:16.482004   12896 main.go:141] libmachine: [stdout =====>] : Running
	
	I0328 01:09:16.482004   12896 main.go:141] libmachine: [stderr =====>] : 
	I0328 01:09:16.482116   12896 machine.go:94] provisionDockerMachine start ...
	I0328 01:09:16.482237   12896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-240000-m02 ).state
	I0328 01:09:18.794430   12896 main.go:141] libmachine: [stdout =====>] : Running
	
	I0328 01:09:18.794430   12896 main.go:141] libmachine: [stderr =====>] : 
	I0328 01:09:18.794938   12896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-240000-m02 ).networkadapters[0]).ipaddresses[0]
	I0328 01:09:21.560342   12896 main.go:141] libmachine: [stdout =====>] : 172.28.230.250
	
	I0328 01:09:21.560641   12896 main.go:141] libmachine: [stderr =====>] : 
	I0328 01:09:21.566677   12896 main.go:141] libmachine: Using SSH client type: native
	I0328 01:09:21.566972   12896 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x12a9f80] 0x12acb60 <nil>  [] 0s} 172.28.230.250 22 <nil> <nil>}
	I0328 01:09:21.566972   12896 main.go:141] libmachine: About to run SSH command:
	hostname
	I0328 01:09:21.684180   12896 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0328 01:09:21.684252   12896 buildroot.go:166] provisioning hostname "multinode-240000-m02"
	I0328 01:09:21.684252   12896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-240000-m02 ).state
	I0328 01:09:23.977744   12896 main.go:141] libmachine: [stdout =====>] : Running
	
	I0328 01:09:23.977798   12896 main.go:141] libmachine: [stderr =====>] : 
	I0328 01:09:23.977798   12896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-240000-m02 ).networkadapters[0]).ipaddresses[0]
	I0328 01:09:26.692482   12896 main.go:141] libmachine: [stdout =====>] : 172.28.230.250
	
	I0328 01:09:26.692482   12896 main.go:141] libmachine: [stderr =====>] : 
	I0328 01:09:26.699233   12896 main.go:141] libmachine: Using SSH client type: native
	I0328 01:09:26.699895   12896 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x12a9f80] 0x12acb60 <nil>  [] 0s} 172.28.230.250 22 <nil> <nil>}
	I0328 01:09:26.699895   12896 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-240000-m02 && echo "multinode-240000-m02" | sudo tee /etc/hostname
	I0328 01:09:26.858505   12896 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-240000-m02
	
	I0328 01:09:26.858641   12896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-240000-m02 ).state
	I0328 01:09:29.142341   12896 main.go:141] libmachine: [stdout =====>] : Running
	
	I0328 01:09:29.142341   12896 main.go:141] libmachine: [stderr =====>] : 
	I0328 01:09:29.142507   12896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-240000-m02 ).networkadapters[0]).ipaddresses[0]
	I0328 01:09:31.897679   12896 main.go:141] libmachine: [stdout =====>] : 172.28.230.250
	
	I0328 01:09:31.897679   12896 main.go:141] libmachine: [stderr =====>] : 
	I0328 01:09:31.904274   12896 main.go:141] libmachine: Using SSH client type: native
	I0328 01:09:31.904904   12896 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x12a9f80] 0x12acb60 <nil>  [] 0s} 172.28.230.250 22 <nil> <nil>}
	I0328 01:09:31.904962   12896 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-240000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-240000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-240000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0328 01:09:32.052945   12896 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0328 01:09:32.052945   12896 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0328 01:09:32.052945   12896 buildroot.go:174] setting up certificates
	I0328 01:09:32.052945   12896 provision.go:84] configureAuth start
	I0328 01:09:32.052945   12896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-240000-m02 ).state
	I0328 01:09:34.368479   12896 main.go:141] libmachine: [stdout =====>] : Running
	
	I0328 01:09:34.368479   12896 main.go:141] libmachine: [stderr =====>] : 
	I0328 01:09:34.368479   12896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-240000-m02 ).networkadapters[0]).ipaddresses[0]
	I0328 01:09:37.135067   12896 main.go:141] libmachine: [stdout =====>] : 172.28.230.250
	
	I0328 01:09:37.135067   12896 main.go:141] libmachine: [stderr =====>] : 
	I0328 01:09:37.135333   12896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-240000-m02 ).state
	I0328 01:09:39.442763   12896 main.go:141] libmachine: [stdout =====>] : Running
	
	I0328 01:09:39.442763   12896 main.go:141] libmachine: [stderr =====>] : 
	I0328 01:09:39.443751   12896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-240000-m02 ).networkadapters[0]).ipaddresses[0]
	I0328 01:09:42.222359   12896 main.go:141] libmachine: [stdout =====>] : 172.28.230.250
	
	I0328 01:09:42.222359   12896 main.go:141] libmachine: [stderr =====>] : 
	I0328 01:09:42.223480   12896 provision.go:143] copyHostCerts
	I0328 01:09:42.223927   12896 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0328 01:09:42.224258   12896 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0328 01:09:42.224258   12896 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0328 01:09:42.224680   12896 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0328 01:09:42.225670   12896 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0328 01:09:42.226076   12896 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0328 01:09:42.226076   12896 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0328 01:09:42.226467   12896 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1675 bytes)
	I0328 01:09:42.227461   12896 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0328 01:09:42.227610   12896 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0328 01:09:42.227610   12896 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0328 01:09:42.227610   12896 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0328 01:09:42.228861   12896 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-240000-m02 san=[127.0.0.1 172.28.230.250 localhost minikube multinode-240000-m02]
	I0328 01:09:42.610446   12896 provision.go:177] copyRemoteCerts
	I0328 01:09:42.622468   12896 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0328 01:09:42.622468   12896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-240000-m02 ).state
	I0328 01:09:44.946422   12896 main.go:141] libmachine: [stdout =====>] : Running
	
	I0328 01:09:44.946422   12896 main.go:141] libmachine: [stderr =====>] : 
	I0328 01:09:44.947055   12896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-240000-m02 ).networkadapters[0]).ipaddresses[0]
	I0328 01:09:47.690535   12896 main.go:141] libmachine: [stdout =====>] : 172.28.230.250
	
	I0328 01:09:47.690905   12896 main.go:141] libmachine: [stderr =====>] : 
	I0328 01:09:47.691866   12896 sshutil.go:53] new ssh client: &{IP:172.28.230.250 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-240000-m02\id_rsa Username:docker}
	I0328 01:09:47.798458   12896 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (5.1759558s)
	I0328 01:09:47.798580   12896 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0328 01:09:47.798723   12896 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0328 01:09:47.850808   12896 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0328 01:09:47.851351   12896 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1229 bytes)
	I0328 01:09:47.904200   12896 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0328 01:09:47.904677   12896 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0328 01:09:47.957981   12896 provision.go:87] duration metric: took 15.9048752s to configureAuth
	I0328 01:09:47.957981   12896 buildroot.go:189] setting minikube options for container-runtime
	I0328 01:09:47.958554   12896 config.go:182] Loaded profile config "multinode-240000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0328 01:09:47.958607   12896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-240000-m02 ).state
	I0328 01:09:50.297968   12896 main.go:141] libmachine: [stdout =====>] : Running
	
	I0328 01:09:50.298962   12896 main.go:141] libmachine: [stderr =====>] : 
	I0328 01:09:50.299078   12896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-240000-m02 ).networkadapters[0]).ipaddresses[0]
	I0328 01:09:53.043074   12896 main.go:141] libmachine: [stdout =====>] : 172.28.230.250
	
	I0328 01:09:53.043749   12896 main.go:141] libmachine: [stderr =====>] : 
	I0328 01:09:53.048626   12896 main.go:141] libmachine: Using SSH client type: native
	I0328 01:09:53.049557   12896 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x12a9f80] 0x12acb60 <nil>  [] 0s} 172.28.230.250 22 <nil> <nil>}
	I0328 01:09:53.049557   12896 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0328 01:09:53.177706   12896 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0328 01:09:53.177762   12896 buildroot.go:70] root file system type: tmpfs
	I0328 01:09:53.177932   12896 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0328 01:09:53.178001   12896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-240000-m02 ).state
	I0328 01:09:55.500580   12896 main.go:141] libmachine: [stdout =====>] : Running
	
	I0328 01:09:55.500796   12896 main.go:141] libmachine: [stderr =====>] : 
	I0328 01:09:55.500796   12896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-240000-m02 ).networkadapters[0]).ipaddresses[0]
	I0328 01:09:58.232388   12896 main.go:141] libmachine: [stdout =====>] : 172.28.230.250
	
	I0328 01:09:58.232388   12896 main.go:141] libmachine: [stderr =====>] : 
	I0328 01:09:58.239227   12896 main.go:141] libmachine: Using SSH client type: native
	I0328 01:09:58.239306   12896 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x12a9f80] 0x12acb60 <nil>  [] 0s} 172.28.230.250 22 <nil> <nil>}
	I0328 01:09:58.239842   12896 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.28.227.122"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0328 01:09:58.406225   12896 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.28.227.122
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0328 01:09:58.406277   12896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-240000-m02 ).state
	I0328 01:10:00.701781   12896 main.go:141] libmachine: [stdout =====>] : Running
	
	I0328 01:10:00.701781   12896 main.go:141] libmachine: [stderr =====>] : 
	I0328 01:10:00.702462   12896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-240000-m02 ).networkadapters[0]).ipaddresses[0]
	I0328 01:10:03.492000   12896 main.go:141] libmachine: [stdout =====>] : 172.28.230.250
	
	I0328 01:10:03.492762   12896 main.go:141] libmachine: [stderr =====>] : 
	I0328 01:10:03.499354   12896 main.go:141] libmachine: Using SSH client type: native
	I0328 01:10:03.499865   12896 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x12a9f80] 0x12acb60 <nil>  [] 0s} 172.28.230.250 22 <nil> <nil>}
	I0328 01:10:03.499956   12896 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0328 01:10:05.742852   12896 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0328 01:10:05.742991   12896 machine.go:97] duration metric: took 49.2605445s to provisionDockerMachine
	I0328 01:10:05.742991   12896 client.go:171] duration metric: took 2m4.6035828s to LocalClient.Create
	I0328 01:10:05.742991   12896 start.go:167] duration metric: took 2m4.6041317s to libmachine.API.Create "multinode-240000"
	I0328 01:10:05.743108   12896 start.go:293] postStartSetup for "multinode-240000-m02" (driver="hyperv")
	I0328 01:10:05.743108   12896 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0328 01:10:05.757965   12896 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0328 01:10:05.757965   12896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-240000-m02 ).state
	I0328 01:10:08.032878   12896 main.go:141] libmachine: [stdout =====>] : Running
	
	I0328 01:10:08.032878   12896 main.go:141] libmachine: [stderr =====>] : 
	I0328 01:10:08.033763   12896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-240000-m02 ).networkadapters[0]).ipaddresses[0]
	I0328 01:10:10.811488   12896 main.go:141] libmachine: [stdout =====>] : 172.28.230.250
	
	I0328 01:10:10.812202   12896 main.go:141] libmachine: [stderr =====>] : 
	I0328 01:10:10.812752   12896 sshutil.go:53] new ssh client: &{IP:172.28.230.250 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-240000-m02\id_rsa Username:docker}
	I0328 01:10:10.915355   12896 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (5.1572307s)
	I0328 01:10:10.928046   12896 ssh_runner.go:195] Run: cat /etc/os-release
	I0328 01:10:10.935710   12896 command_runner.go:130] > NAME=Buildroot
	I0328 01:10:10.935710   12896 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0328 01:10:10.935710   12896 command_runner.go:130] > ID=buildroot
	I0328 01:10:10.935710   12896 command_runner.go:130] > VERSION_ID=2023.02.9
	I0328 01:10:10.935710   12896 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0328 01:10:10.935975   12896 info.go:137] Remote host: Buildroot 2023.02.9
	I0328 01:10:10.936054   12896 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0328 01:10:10.936474   12896 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0328 01:10:10.937226   12896 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\104602.pem -> 104602.pem in /etc/ssl/certs
	I0328 01:10:10.937226   12896 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\104602.pem -> /etc/ssl/certs/104602.pem
	I0328 01:10:10.950677   12896 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0328 01:10:10.969089   12896 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\104602.pem --> /etc/ssl/certs/104602.pem (1708 bytes)
	I0328 01:10:11.018536   12896 start.go:296] duration metric: took 5.2753926s for postStartSetup
	I0328 01:10:11.021370   12896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-240000-m02 ).state
	I0328 01:10:13.283056   12896 main.go:141] libmachine: [stdout =====>] : Running
	
	I0328 01:10:13.283811   12896 main.go:141] libmachine: [stderr =====>] : 
	I0328 01:10:13.283811   12896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-240000-m02 ).networkadapters[0]).ipaddresses[0]
	I0328 01:10:16.037765   12896 main.go:141] libmachine: [stdout =====>] : 172.28.230.250
	
	I0328 01:10:16.037765   12896 main.go:141] libmachine: [stderr =====>] : 
	I0328 01:10:16.038204   12896 profile.go:142] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-240000\config.json ...
	I0328 01:10:16.041368   12896 start.go:128] duration metric: took 2m14.9060734s to createHost
	I0328 01:10:16.041508   12896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-240000-m02 ).state
	I0328 01:10:18.335345   12896 main.go:141] libmachine: [stdout =====>] : Running
	
	I0328 01:10:18.335345   12896 main.go:141] libmachine: [stderr =====>] : 
	I0328 01:10:18.335345   12896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-240000-m02 ).networkadapters[0]).ipaddresses[0]
	I0328 01:10:21.104222   12896 main.go:141] libmachine: [stdout =====>] : 172.28.230.250
	
	I0328 01:10:21.105306   12896 main.go:141] libmachine: [stderr =====>] : 
	I0328 01:10:21.111344   12896 main.go:141] libmachine: Using SSH client type: native
	I0328 01:10:21.112106   12896 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x12a9f80] 0x12acb60 <nil>  [] 0s} 172.28.230.250 22 <nil> <nil>}
	I0328 01:10:21.112106   12896 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0328 01:10:21.242304   12896 main.go:141] libmachine: SSH cmd err, output: <nil>: 1711588221.250654639
	
	I0328 01:10:21.242378   12896 fix.go:216] guest clock: 1711588221.250654639
	I0328 01:10:21.242378   12896 fix.go:229] Guest: 2024-03-28 01:10:21.250654639 +0000 UTC Remote: 2024-03-28 01:10:16.0413688 +0000 UTC m=+363.916884701 (delta=5.209285839s)
	I0328 01:10:21.242497   12896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-240000-m02 ).state
	I0328 01:10:23.552229   12896 main.go:141] libmachine: [stdout =====>] : Running
	
	I0328 01:10:23.553110   12896 main.go:141] libmachine: [stderr =====>] : 
	I0328 01:10:23.553178   12896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-240000-m02 ).networkadapters[0]).ipaddresses[0]
	I0328 01:10:26.379810   12896 main.go:141] libmachine: [stdout =====>] : 172.28.230.250
	
	I0328 01:10:26.379810   12896 main.go:141] libmachine: [stderr =====>] : 
	I0328 01:10:26.388914   12896 main.go:141] libmachine: Using SSH client type: native
	I0328 01:10:26.388914   12896 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x12a9f80] 0x12acb60 <nil>  [] 0s} 172.28.230.250 22 <nil> <nil>}
	I0328 01:10:26.388914   12896 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1711588221
	I0328 01:10:26.533808   12896 main.go:141] libmachine: SSH cmd err, output: <nil>: Thu Mar 28 01:10:21 UTC 2024
	
	I0328 01:10:26.533903   12896 fix.go:236] clock set: Thu Mar 28 01:10:21 UTC 2024
	 (err=<nil>)
	I0328 01:10:26.533903   12896 start.go:83] releasing machines lock for "multinode-240000-m02", held for 2m25.3990662s
	I0328 01:10:26.534152   12896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-240000-m02 ).state
	I0328 01:10:28.789228   12896 main.go:141] libmachine: [stdout =====>] : Running
	
	I0328 01:10:28.789228   12896 main.go:141] libmachine: [stderr =====>] : 
	I0328 01:10:28.790131   12896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-240000-m02 ).networkadapters[0]).ipaddresses[0]
	I0328 01:10:31.566284   12896 main.go:141] libmachine: [stdout =====>] : 172.28.230.250
	
	I0328 01:10:31.566874   12896 main.go:141] libmachine: [stderr =====>] : 
	I0328 01:10:31.570118   12896 out.go:177] * Found network options:
	I0328 01:10:31.572690   12896 out.go:177]   - NO_PROXY=172.28.227.122
	W0328 01:10:31.574562   12896 proxy.go:119] fail to check proxy env: Error ip not in block
	I0328 01:10:31.576953   12896 out.go:177]   - NO_PROXY=172.28.227.122
	W0328 01:10:31.578673   12896 proxy.go:119] fail to check proxy env: Error ip not in block
	W0328 01:10:31.580902   12896 proxy.go:119] fail to check proxy env: Error ip not in block
	I0328 01:10:31.582873   12896 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0328 01:10:31.582873   12896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-240000-m02 ).state
	I0328 01:10:31.593845   12896 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0328 01:10:31.593845   12896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-240000-m02 ).state
	I0328 01:10:33.931718   12896 main.go:141] libmachine: [stdout =====>] : Running
	
	I0328 01:10:33.931718   12896 main.go:141] libmachine: [stderr =====>] : 
	I0328 01:10:33.932580   12896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-240000-m02 ).networkadapters[0]).ipaddresses[0]
	I0328 01:10:33.952245   12896 main.go:141] libmachine: [stdout =====>] : Running
	
	I0328 01:10:33.952245   12896 main.go:141] libmachine: [stderr =====>] : 
	I0328 01:10:33.952245   12896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-240000-m02 ).networkadapters[0]).ipaddresses[0]
	I0328 01:10:36.716896   12896 main.go:141] libmachine: [stdout =====>] : 172.28.230.250
	
	I0328 01:10:36.717899   12896 main.go:141] libmachine: [stderr =====>] : 
	I0328 01:10:36.718385   12896 sshutil.go:53] new ssh client: &{IP:172.28.230.250 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-240000-m02\id_rsa Username:docker}
	I0328 01:10:36.762046   12896 main.go:141] libmachine: [stdout =====>] : 172.28.230.250
	
	I0328 01:10:36.762170   12896 main.go:141] libmachine: [stderr =====>] : 
	I0328 01:10:36.762827   12896 sshutil.go:53] new ssh client: &{IP:172.28.230.250 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-240000-m02\id_rsa Username:docker}
	I0328 01:10:36.876822   12896 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0328 01:10:36.876822   12896 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.2939132s)
	I0328 01:10:36.876822   12896 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	I0328 01:10:36.876822   12896 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (5.282942s)
	W0328 01:10:36.876822   12896 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0328 01:10:36.891619   12896 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0328 01:10:36.923163   12896 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0328 01:10:36.923284   12896 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0328 01:10:36.923342   12896 start.go:494] detecting cgroup driver to use...
	I0328 01:10:36.923830   12896 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0328 01:10:36.960037   12896 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0328 01:10:36.973911   12896 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0328 01:10:37.006485   12896 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0328 01:10:37.027746   12896 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0328 01:10:37.040893   12896 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0328 01:10:37.076558   12896 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0328 01:10:37.116343   12896 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0328 01:10:37.150348   12896 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0328 01:10:37.187244   12896 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0328 01:10:37.222243   12896 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0328 01:10:37.258832   12896 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0328 01:10:37.294397   12896 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0328 01:10:37.330940   12896 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0328 01:10:37.350548   12896 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0328 01:10:37.367956   12896 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0328 01:10:37.404137   12896 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0328 01:10:37.619840   12896 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0328 01:10:37.654295   12896 start.go:494] detecting cgroup driver to use...
	I0328 01:10:37.667180   12896 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0328 01:10:37.690548   12896 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0328 01:10:37.690590   12896 command_runner.go:130] > [Unit]
	I0328 01:10:37.690590   12896 command_runner.go:130] > Description=Docker Application Container Engine
	I0328 01:10:37.690590   12896 command_runner.go:130] > Documentation=https://docs.docker.com
	I0328 01:10:37.690655   12896 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0328 01:10:37.690655   12896 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0328 01:10:37.690655   12896 command_runner.go:130] > StartLimitBurst=3
	I0328 01:10:37.690655   12896 command_runner.go:130] > StartLimitIntervalSec=60
	I0328 01:10:37.690655   12896 command_runner.go:130] > [Service]
	I0328 01:10:37.690655   12896 command_runner.go:130] > Type=notify
	I0328 01:10:37.690714   12896 command_runner.go:130] > Restart=on-failure
	I0328 01:10:37.690714   12896 command_runner.go:130] > Environment=NO_PROXY=172.28.227.122
	I0328 01:10:37.690714   12896 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0328 01:10:37.690714   12896 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0328 01:10:37.690714   12896 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0328 01:10:37.690773   12896 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0328 01:10:37.690773   12896 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0328 01:10:37.690834   12896 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0328 01:10:37.690858   12896 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0328 01:10:37.690858   12896 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0328 01:10:37.690901   12896 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0328 01:10:37.690941   12896 command_runner.go:130] > ExecStart=
	I0328 01:10:37.690941   12896 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0328 01:10:37.690991   12896 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0328 01:10:37.690991   12896 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0328 01:10:37.690991   12896 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0328 01:10:37.691032   12896 command_runner.go:130] > LimitNOFILE=infinity
	I0328 01:10:37.691032   12896 command_runner.go:130] > LimitNPROC=infinity
	I0328 01:10:37.691032   12896 command_runner.go:130] > LimitCORE=infinity
	I0328 01:10:37.691032   12896 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0328 01:10:37.691032   12896 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0328 01:10:37.691032   12896 command_runner.go:130] > TasksMax=infinity
	I0328 01:10:37.691032   12896 command_runner.go:130] > TimeoutStartSec=0
	I0328 01:10:37.691032   12896 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0328 01:10:37.691032   12896 command_runner.go:130] > Delegate=yes
	I0328 01:10:37.691032   12896 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0328 01:10:37.691032   12896 command_runner.go:130] > KillMode=process
	I0328 01:10:37.691032   12896 command_runner.go:130] > [Install]
	I0328 01:10:37.691032   12896 command_runner.go:130] > WantedBy=multi-user.target
	I0328 01:10:37.706375   12896 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0328 01:10:37.743400   12896 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0328 01:10:37.794493   12896 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0328 01:10:37.836165   12896 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0328 01:10:37.877352   12896 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0328 01:10:37.948858   12896 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0328 01:10:37.974223   12896 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0328 01:10:38.010021   12896 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0328 01:10:38.026944   12896 ssh_runner.go:195] Run: which cri-dockerd
	I0328 01:10:38.034618   12896 command_runner.go:130] > /usr/bin/cri-dockerd
	I0328 01:10:38.047224   12896 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0328 01:10:38.068320   12896 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0328 01:10:38.113852   12896 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0328 01:10:38.336460   12896 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0328 01:10:38.548650   12896 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0328 01:10:38.548650   12896 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0328 01:10:38.596554   12896 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0328 01:10:38.815813   12896 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0328 01:10:41.406770   12896 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5909397s)
	I0328 01:10:41.420594   12896 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0328 01:10:41.462008   12896 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0328 01:10:41.503003   12896 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0328 01:10:41.737252   12896 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0328 01:10:41.955580   12896 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0328 01:10:42.178520   12896 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0328 01:10:42.225995   12896 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0328 01:10:42.267349   12896 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0328 01:10:42.480977   12896 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0328 01:10:42.594216   12896 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0328 01:10:42.610833   12896 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0328 01:10:42.619819   12896 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0328 01:10:42.620010   12896 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0328 01:10:42.620010   12896 command_runner.go:130] > Device: 0,22	Inode: 898         Links: 1
	I0328 01:10:42.620010   12896 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0328 01:10:42.620010   12896 command_runner.go:130] > Access: 2024-03-28 01:10:42.513090605 +0000
	I0328 01:10:42.620010   12896 command_runner.go:130] > Modify: 2024-03-28 01:10:42.513090605 +0000
	I0328 01:10:42.620079   12896 command_runner.go:130] > Change: 2024-03-28 01:10:42.516090590 +0000
	I0328 01:10:42.620079   12896 command_runner.go:130] >  Birth: -
	I0328 01:10:42.620187   12896 start.go:562] Will wait 60s for crictl version
	I0328 01:10:42.637153   12896 ssh_runner.go:195] Run: which crictl
	I0328 01:10:42.646942   12896 command_runner.go:130] > /usr/bin/crictl
	I0328 01:10:42.663361   12896 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0328 01:10:42.747823   12896 command_runner.go:130] > Version:  0.1.0
	I0328 01:10:42.747823   12896 command_runner.go:130] > RuntimeName:  docker
	I0328 01:10:42.747823   12896 command_runner.go:130] > RuntimeVersion:  26.0.0
	I0328 01:10:42.747823   12896 command_runner.go:130] > RuntimeApiVersion:  v1
	I0328 01:10:42.747823   12896 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.0
	RuntimeApiVersion:  v1
	I0328 01:10:42.757810   12896 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0328 01:10:42.791670   12896 command_runner.go:130] > 26.0.0
	I0328 01:10:42.801651   12896 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0328 01:10:42.834952   12896 command_runner.go:130] > 26.0.0
	I0328 01:10:42.840591   12896 out.go:204] * Preparing Kubernetes v1.29.3 on Docker 26.0.0 ...
	I0328 01:10:42.842875   12896 out.go:177]   - env NO_PROXY=172.28.227.122
	I0328 01:10:42.845087   12896 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0328 01:10:42.849713   12896 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0328 01:10:42.849713   12896 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0328 01:10:42.849713   12896 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0328 01:10:42.849713   12896 ip.go:207] Found interface: {Index:6 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:26:7a:39 Flags:up|broadcast|multicast|running}
	I0328 01:10:42.852254   12896 ip.go:210] interface addr: fe80::e3e0:8483:9c84:940f/64
	I0328 01:10:42.852254   12896 ip.go:210] interface addr: 172.28.224.1/20
	I0328 01:10:42.867462   12896 ssh_runner.go:195] Run: grep 172.28.224.1	host.minikube.internal$ /etc/hosts
	I0328 01:10:42.874551   12896 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.28.224.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0328 01:10:42.897479   12896 mustload.go:65] Loading cluster: multinode-240000
	I0328 01:10:42.898065   12896 config.go:182] Loaded profile config "multinode-240000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0328 01:10:42.898790   12896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-240000 ).state
	I0328 01:10:45.186247   12896 main.go:141] libmachine: [stdout =====>] : Running
	
	I0328 01:10:45.186484   12896 main.go:141] libmachine: [stderr =====>] : 
	I0328 01:10:45.186545   12896 host.go:66] Checking if "multinode-240000" exists ...
	I0328 01:10:45.186545   12896 certs.go:68] Setting up C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-240000 for IP: 172.28.230.250
	I0328 01:10:45.187125   12896 certs.go:194] generating shared ca certs ...
	I0328 01:10:45.187125   12896 certs.go:226] acquiring lock for ca certs: {Name:mkc71405905d3cea24da832e98113e061e759324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 01:10:45.187217   12896 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key
	I0328 01:10:45.187933   12896 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key
	I0328 01:10:45.188176   12896 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0328 01:10:45.188386   12896 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0328 01:10:45.188635   12896 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0328 01:10:45.188719   12896 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0328 01:10:45.189312   12896 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\10460.pem (1338 bytes)
	W0328 01:10:45.189365   12896 certs.go:480] ignoring C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\10460_empty.pem, impossibly tiny 0 bytes
	I0328 01:10:45.189365   12896 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0328 01:10:45.189982   12896 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0328 01:10:45.190232   12896 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0328 01:10:45.190495   12896 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0328 01:10:45.190782   12896 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\104602.pem (1708 bytes)
	I0328 01:10:45.190782   12896 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\10460.pem -> /usr/share/ca-certificates/10460.pem
	I0328 01:10:45.191398   12896 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\104602.pem -> /usr/share/ca-certificates/104602.pem
	I0328 01:10:45.191575   12896 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0328 01:10:45.191712   12896 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0328 01:10:45.243491   12896 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0328 01:10:45.294469   12896 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0328 01:10:45.344319   12896 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0328 01:10:45.392929   12896 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\10460.pem --> /usr/share/ca-certificates/10460.pem (1338 bytes)
	I0328 01:10:45.458300   12896 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\104602.pem --> /usr/share/ca-certificates/104602.pem (1708 bytes)
	I0328 01:10:45.523943   12896 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0328 01:10:45.607180   12896 ssh_runner.go:195] Run: openssl version
	I0328 01:10:45.616469   12896 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0328 01:10:45.630726   12896 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10460.pem && ln -fs /usr/share/ca-certificates/10460.pem /etc/ssl/certs/10460.pem"
	I0328 01:10:45.667681   12896 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10460.pem
	I0328 01:10:45.676527   12896 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Mar 27 23:40 /usr/share/ca-certificates/10460.pem
	I0328 01:10:45.676732   12896 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 27 23:40 /usr/share/ca-certificates/10460.pem
	I0328 01:10:45.689814   12896 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10460.pem
	I0328 01:10:45.700148   12896 command_runner.go:130] > 51391683
	I0328 01:10:45.712816   12896 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10460.pem /etc/ssl/certs/51391683.0"
	I0328 01:10:45.750436   12896 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/104602.pem && ln -fs /usr/share/ca-certificates/104602.pem /etc/ssl/certs/104602.pem"
	I0328 01:10:45.786734   12896 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/104602.pem
	I0328 01:10:45.793850   12896 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Mar 27 23:40 /usr/share/ca-certificates/104602.pem
	I0328 01:10:45.793850   12896 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 27 23:40 /usr/share/ca-certificates/104602.pem
	I0328 01:10:45.806757   12896 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/104602.pem
	I0328 01:10:45.820443   12896 command_runner.go:130] > 3ec20f2e
	I0328 01:10:45.833783   12896 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/104602.pem /etc/ssl/certs/3ec20f2e.0"
	I0328 01:10:45.868657   12896 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0328 01:10:45.902476   12896 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0328 01:10:45.910322   12896 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Mar 27 23:37 /usr/share/ca-certificates/minikubeCA.pem
	I0328 01:10:45.910322   12896 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 27 23:37 /usr/share/ca-certificates/minikubeCA.pem
	I0328 01:10:45.923152   12896 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0328 01:10:45.932541   12896 command_runner.go:130] > b5213941
	I0328 01:10:45.945309   12896 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0328 01:10:45.977161   12896 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0328 01:10:45.983811   12896 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0328 01:10:45.983811   12896 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0328 01:10:45.984347   12896 kubeadm.go:928] updating node {m02 172.28.230.250 8443 v1.29.3 docker false true} ...
	I0328 01:10:45.984347   12896 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-240000-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.28.230.250
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:multinode-240000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0328 01:10:45.996695   12896 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0328 01:10:46.015962   12896 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/binaries/v1.29.3': No such file or directory
	I0328 01:10:46.015962   12896 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.29.3: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.29.3': No such file or directory
	
	Initiating transfer...
	I0328 01:10:46.029132   12896 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.29.3
	I0328 01:10:46.050665   12896 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubectl.sha256
	I0328 01:10:46.050665   12896 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubelet.sha256
	I0328 01:10:46.050991   12896 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubeadm.sha256
	I0328 01:10:46.050991   12896 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.29.3/kubectl -> /var/lib/minikube/binaries/v1.29.3/kubectl
	I0328 01:10:46.050991   12896 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.29.3/kubeadm -> /var/lib/minikube/binaries/v1.29.3/kubeadm
	I0328 01:10:46.069918   12896 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0328 01:10:46.069918   12896 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.29.3/kubectl
	I0328 01:10:46.070294   12896 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.29.3/kubeadm
	I0328 01:10:46.102689   12896 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.29.3/kubectl': No such file or directory
	I0328 01:10:46.102752   12896 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.29.3/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.29.3/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.29.3/kubectl': No such file or directory
	I0328 01:10:46.102689   12896 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.29.3/kubelet -> /var/lib/minikube/binaries/v1.29.3/kubelet
	I0328 01:10:46.102878   12896 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.29.3/kubeadm': No such file or directory
	I0328 01:10:46.102955   12896 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.29.3/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.29.3/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.29.3/kubeadm': No such file or directory
	I0328 01:10:46.102955   12896 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.29.3/kubectl --> /var/lib/minikube/binaries/v1.29.3/kubectl (49799168 bytes)
	I0328 01:10:46.103108   12896 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.29.3/kubeadm --> /var/lib/minikube/binaries/v1.29.3/kubeadm (48340992 bytes)
	I0328 01:10:46.117902   12896 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.29.3/kubelet
	I0328 01:10:46.199313   12896 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.29.3/kubelet': No such file or directory
	I0328 01:10:46.199313   12896 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.29.3/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.29.3/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.29.3/kubelet': No such file or directory
	I0328 01:10:46.199996   12896 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.29.3/kubelet --> /var/lib/minikube/binaries/v1.29.3/kubelet (111919104 bytes)
	I0328 01:10:47.489066   12896 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0328 01:10:47.510096   12896 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (321 bytes)
	I0328 01:10:47.546113   12896 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0328 01:10:47.597027   12896 ssh_runner.go:195] Run: grep 172.28.227.122	control-plane.minikube.internal$ /etc/hosts
	I0328 01:10:47.603914   12896 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.28.227.122	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0328 01:10:47.639293   12896 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0328 01:10:47.874434   12896 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0328 01:10:47.909178   12896 host.go:66] Checking if "multinode-240000" exists ...
	I0328 01:10:47.910243   12896 start.go:316] joinCluster: &{Name:multinode-240000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:multinode-240000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.28.227.122 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.28.230.250 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0328 01:10:47.910243   12896 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0328 01:10:47.910243   12896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-240000 ).state
	I0328 01:10:50.311868   12896 main.go:141] libmachine: [stdout =====>] : Running
	
	I0328 01:10:50.311868   12896 main.go:141] libmachine: [stderr =====>] : 
	I0328 01:10:50.311868   12896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-240000 ).networkadapters[0]).ipaddresses[0]
	I0328 01:10:53.118354   12896 main.go:141] libmachine: [stdout =====>] : 172.28.227.122
	
	I0328 01:10:53.119141   12896 main.go:141] libmachine: [stderr =====>] : 
	I0328 01:10:53.119615   12896 sshutil.go:53] new ssh client: &{IP:172.28.227.122 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-240000\id_rsa Username:docker}
	I0328 01:10:53.338587   12896 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token 21uc52.ogu77ij32bw2ro8s --discovery-token-ca-cert-hash sha256:0ddd52f56960165cc9115f65779af061c85c9b2eafbef06ee2128eb63ce54d7a 
	I0328 01:10:53.338587   12896 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm token create --print-join-command --ttl=0": (5.4283078s)
	I0328 01:10:53.338587   12896 start.go:342] trying to join worker node "m02" to cluster: &{Name:m02 IP:172.28.230.250 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0328 01:10:53.339594   12896 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 21uc52.ogu77ij32bw2ro8s --discovery-token-ca-cert-hash sha256:0ddd52f56960165cc9115f65779af061c85c9b2eafbef06ee2128eb63ce54d7a --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=multinode-240000-m02"
	I0328 01:10:53.598342   12896 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0328 01:10:55.467765   12896 command_runner.go:130] > [preflight] Running pre-flight checks
	I0328 01:10:55.467765   12896 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0328 01:10:55.467765   12896 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0328 01:10:55.467922   12896 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0328 01:10:55.467979   12896 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0328 01:10:55.467979   12896 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0328 01:10:55.467979   12896 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I0328 01:10:55.467979   12896 command_runner.go:130] > This node has joined the cluster:
	I0328 01:10:55.467979   12896 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I0328 01:10:55.467979   12896 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I0328 01:10:55.467979   12896 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0328 01:10:55.468065   12896 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 21uc52.ogu77ij32bw2ro8s --discovery-token-ca-cert-hash sha256:0ddd52f56960165cc9115f65779af061c85c9b2eafbef06ee2128eb63ce54d7a --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=multinode-240000-m02": (2.1284564s)
	I0328 01:10:55.468065   12896 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0328 01:10:55.736310   12896 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /usr/lib/systemd/system/kubelet.service.
	I0328 01:10:55.959845   12896 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes multinode-240000-m02 minikube.k8s.io/updated_at=2024_03_28T01_10_55_0700 minikube.k8s.io/version=v1.33.0-beta.0 minikube.k8s.io/commit=1a940f980f77c9bfc8ef93678db8c1f49c9dd79d minikube.k8s.io/name=multinode-240000 minikube.k8s.io/primary=false
	I0328 01:10:56.099897   12896 command_runner.go:130] > node/multinode-240000-m02 labeled
	I0328 01:10:56.099897   12896 start.go:318] duration metric: took 8.1895991s to joinCluster
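The join sequence logged above (create a bootstrap token on the control plane, run `kubeadm join` on the worker with the cri-dockerd socket and an explicit node name, then enable kubelet and label the node) can be sketched as a small helper that assembles the same worker-join command line. This is an illustrative sketch, not minikube's implementation; all concrete values (endpoint, token, hash, node name) are placeholders.

```python
def build_join_command(endpoint: str, token: str, ca_cert_hash: str,
                       node_name: str) -> str:
    """Assemble a kubeadm worker-join command with the flags seen in the
    log: preflight errors ignored, cri-dockerd as the CRI socket, and an
    explicit --node-name for the worker."""
    return (
        f"kubeadm join {endpoint}"
        f" --token {token}"
        f" --discovery-token-ca-cert-hash sha256:{ca_cert_hash}"
        " --ignore-preflight-errors=all"
        " --cri-socket unix:///var/run/cri-dockerd.sock"
        f" --node-name={node_name}"
    )

if __name__ == "__main__":
    # Placeholder values for illustration only -- not real credentials.
    print(build_join_command(
        "control-plane.minikube.internal:8443",
        "example.bootstraptoken",
        "0" * 64,
        "multinode-240000-m02",
    ))
```

The token and CA-cert hash come from `kubeadm token create --print-join-command` on the control plane, as in the log; the remaining flags are what minikube appends before running the join over SSH.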
	I0328 01:10:56.099897   12896 start.go:234] Will wait 6m0s for node &{Name:m02 IP:172.28.230.250 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0328 01:10:56.102903   12896 out.go:177] * Verifying Kubernetes components...
	I0328 01:10:56.100892   12896 config.go:182] Loaded profile config "multinode-240000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0328 01:10:56.117873   12896 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0328 01:10:56.347443   12896 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0328 01:10:56.374278   12896 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0328 01:10:56.375105   12896 kapi.go:59] client config for multinode-240000: &rest.Config{Host:"https://172.28.227.122:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-240000\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-240000\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x26ab500), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0328 01:10:56.376012   12896 node_ready.go:35] waiting up to 6m0s for node "multinode-240000-m02" to be "Ready" ...
	I0328 01:10:56.376179   12896 round_trippers.go:463] GET https://172.28.227.122:8443/api/v1/nodes/multinode-240000-m02
	I0328 01:10:56.376236   12896 round_trippers.go:469] Request Headers:
	I0328 01:10:56.376276   12896 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:10:56.376276   12896 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:10:56.389332   12896 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0328 01:10:56.389332   12896 round_trippers.go:577] Response Headers:
	I0328 01:10:56.389332   12896 round_trippers.go:580]     Audit-Id: c7861ca2-60cc-4dd0-8442-c247ac90f5e7
	I0328 01:10:56.389332   12896 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:10:56.389332   12896 round_trippers.go:580]     Content-Type: application/json
	I0328 01:10:56.389483   12896 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:10:56.389483   12896 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:10:56.389483   12896 round_trippers.go:580]     Content-Length: 3928
	I0328 01:10:56.389483   12896 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:10:56 GMT
	I0328 01:10:56.389581   12896 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000-m02","uid":"dcbe05d8-e31e-4891-a7f5-f1d6a1993934","resourceVersion":"629","creationTimestamp":"2024-03-28T01:10:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_28T01_10_55_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:10:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl" [truncated 2904 chars]
	I0328 01:10:56.879368   12896 round_trippers.go:463] GET https://172.28.227.122:8443/api/v1/nodes/multinode-240000-m02
	I0328 01:10:56.879368   12896 round_trippers.go:469] Request Headers:
	I0328 01:10:56.879496   12896 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:10:56.879496   12896 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:10:56.885000   12896 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 01:10:56.885000   12896 round_trippers.go:577] Response Headers:
	I0328 01:10:56.885000   12896 round_trippers.go:580]     Audit-Id: 6783a7ee-c5ce-472a-b9df-1400fe8191db
	I0328 01:10:56.885000   12896 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:10:56.885000   12896 round_trippers.go:580]     Content-Type: application/json
	I0328 01:10:56.885000   12896 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:10:56.885000   12896 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:10:56.885980   12896 round_trippers.go:580]     Content-Length: 3928
	I0328 01:10:56.885980   12896 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:10:56 GMT
	I0328 01:10:56.886030   12896 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000-m02","uid":"dcbe05d8-e31e-4891-a7f5-f1d6a1993934","resourceVersion":"629","creationTimestamp":"2024-03-28T01:10:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_28T01_10_55_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:10:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl" [truncated 2904 chars]
	I0328 01:10:57.380449   12896 round_trippers.go:463] GET https://172.28.227.122:8443/api/v1/nodes/multinode-240000-m02
	I0328 01:10:57.380510   12896 round_trippers.go:469] Request Headers:
	I0328 01:10:57.380572   12896 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:10:57.380572   12896 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:10:57.384876   12896 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 01:10:57.385078   12896 round_trippers.go:577] Response Headers:
	I0328 01:10:57.385078   12896 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:10:57.385078   12896 round_trippers.go:580]     Content-Type: application/json
	I0328 01:10:57.385078   12896 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:10:57.385078   12896 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:10:57.385078   12896 round_trippers.go:580]     Content-Length: 3928
	I0328 01:10:57.385078   12896 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:10:57 GMT
	I0328 01:10:57.385078   12896 round_trippers.go:580]     Audit-Id: 915dae45-3982-4e35-9f06-30ffc62174f6
	I0328 01:10:57.385160   12896 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000-m02","uid":"dcbe05d8-e31e-4891-a7f5-f1d6a1993934","resourceVersion":"629","creationTimestamp":"2024-03-28T01:10:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_28T01_10_55_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:10:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl" [truncated 2904 chars]
	I0328 01:10:57.888024   12896 round_trippers.go:463] GET https://172.28.227.122:8443/api/v1/nodes/multinode-240000-m02
	I0328 01:10:57.888024   12896 round_trippers.go:469] Request Headers:
	I0328 01:10:57.888024   12896 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:10:57.888024   12896 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:10:57.893898   12896 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 01:10:57.893898   12896 round_trippers.go:577] Response Headers:
	I0328 01:10:57.893898   12896 round_trippers.go:580]     Audit-Id: ea55b8e7-21e1-4878-abbf-89384aa36945
	I0328 01:10:57.893898   12896 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:10:57.893898   12896 round_trippers.go:580]     Content-Type: application/json
	I0328 01:10:57.893898   12896 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:10:57.893898   12896 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:10:57.893898   12896 round_trippers.go:580]     Content-Length: 3928
	I0328 01:10:57.894005   12896 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:10:57 GMT
	I0328 01:10:57.894181   12896 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000-m02","uid":"dcbe05d8-e31e-4891-a7f5-f1d6a1993934","resourceVersion":"629","creationTimestamp":"2024-03-28T01:10:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_28T01_10_55_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:10:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl" [truncated 2904 chars]
	I0328 01:10:58.381692   12896 round_trippers.go:463] GET https://172.28.227.122:8443/api/v1/nodes/multinode-240000-m02
	I0328 01:10:58.381775   12896 round_trippers.go:469] Request Headers:
	I0328 01:10:58.381775   12896 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:10:58.381775   12896 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:10:58.385149   12896 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0328 01:10:58.386167   12896 round_trippers.go:577] Response Headers:
	I0328 01:10:58.386167   12896 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:10:58.386200   12896 round_trippers.go:580]     Content-Length: 3928
	I0328 01:10:58.386200   12896 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:10:58 GMT
	I0328 01:10:58.386200   12896 round_trippers.go:580]     Audit-Id: a393e65d-0fa7-4735-a5ea-5941c18587e1
	I0328 01:10:58.386200   12896 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:10:58.386200   12896 round_trippers.go:580]     Content-Type: application/json
	I0328 01:10:58.386243   12896 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:10:58.386243   12896 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000-m02","uid":"dcbe05d8-e31e-4891-a7f5-f1d6a1993934","resourceVersion":"629","creationTimestamp":"2024-03-28T01:10:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_28T01_10_55_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:10:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl" [truncated 2904 chars]
	I0328 01:10:58.386243   12896 node_ready.go:53] node "multinode-240000-m02" has status "Ready":"False"
	I0328 01:10:58.890756   12896 round_trippers.go:463] GET https://172.28.227.122:8443/api/v1/nodes/multinode-240000-m02
	I0328 01:10:58.890756   12896 round_trippers.go:469] Request Headers:
	I0328 01:10:58.890756   12896 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:10:58.890756   12896 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:10:58.895347   12896 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 01:10:58.895810   12896 round_trippers.go:577] Response Headers:
	I0328 01:10:58.895810   12896 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:10:58 GMT
	I0328 01:10:58.895810   12896 round_trippers.go:580]     Audit-Id: 0268feb2-19b0-4cfa-aa50-18828a468adf
	I0328 01:10:58.895810   12896 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:10:58.895810   12896 round_trippers.go:580]     Content-Type: application/json
	I0328 01:10:58.895810   12896 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:10:58.895810   12896 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:10:58.895810   12896 round_trippers.go:580]     Content-Length: 3928
	I0328 01:10:58.896022   12896 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000-m02","uid":"dcbe05d8-e31e-4891-a7f5-f1d6a1993934","resourceVersion":"629","creationTimestamp":"2024-03-28T01:10:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_28T01_10_55_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:10:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl" [truncated 2904 chars]
	I0328 01:10:59.381828   12896 round_trippers.go:463] GET https://172.28.227.122:8443/api/v1/nodes/multinode-240000-m02
	I0328 01:10:59.381910   12896 round_trippers.go:469] Request Headers:
	I0328 01:10:59.381910   12896 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:10:59.381910   12896 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:10:59.500819   12896 round_trippers.go:574] Response Status: 200 OK in 118 milliseconds
	I0328 01:10:59.501268   12896 round_trippers.go:577] Response Headers:
	I0328 01:10:59.501268   12896 round_trippers.go:580]     Audit-Id: 09f57ed8-a828-4bda-84ca-09f939dec1f9
	I0328 01:10:59.501370   12896 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:10:59.501370   12896 round_trippers.go:580]     Content-Type: application/json
	I0328 01:10:59.501370   12896 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:10:59.501370   12896 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:10:59.501370   12896 round_trippers.go:580]     Content-Length: 4037
	I0328 01:10:59.501370   12896 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:10:59 GMT
	I0328 01:10:59.501370   12896 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000-m02","uid":"dcbe05d8-e31e-4891-a7f5-f1d6a1993934","resourceVersion":"634","creationTimestamp":"2024-03-28T01:10:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_28T01_10_55_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:10:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-manag [truncated 3013 chars]
	I0328 01:10:59.891546   12896 round_trippers.go:463] GET https://172.28.227.122:8443/api/v1/nodes/multinode-240000-m02
	I0328 01:10:59.891546   12896 round_trippers.go:469] Request Headers:
	I0328 01:10:59.891546   12896 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:10:59.891546   12896 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:10:59.906125   12896 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0328 01:10:59.906212   12896 round_trippers.go:577] Response Headers:
	I0328 01:10:59.906212   12896 round_trippers.go:580]     Audit-Id: 909fb1a2-0a1d-441c-a7f7-66f0312a3c53
	I0328 01:10:59.906212   12896 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:10:59.906212   12896 round_trippers.go:580]     Content-Type: application/json
	I0328 01:10:59.906212   12896 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:10:59.906212   12896 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:10:59.906212   12896 round_trippers.go:580]     Content-Length: 4037
	I0328 01:10:59.906212   12896 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:10:59 GMT
	I0328 01:10:59.906212   12896 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000-m02","uid":"dcbe05d8-e31e-4891-a7f5-f1d6a1993934","resourceVersion":"634","creationTimestamp":"2024-03-28T01:10:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_28T01_10_55_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:10:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-manag [truncated 3013 chars]
	I0328 01:11:00.378856   12896 round_trippers.go:463] GET https://172.28.227.122:8443/api/v1/nodes/multinode-240000-m02
	I0328 01:11:00.378974   12896 round_trippers.go:469] Request Headers:
	I0328 01:11:00.378974   12896 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:11:00.378974   12896 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:11:00.383734   12896 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 01:11:00.383734   12896 round_trippers.go:577] Response Headers:
	I0328 01:11:00.383734   12896 round_trippers.go:580]     Audit-Id: eb76ac05-1b83-4760-bf2c-bae9d4ad5bbe
	I0328 01:11:00.383734   12896 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:11:00.383734   12896 round_trippers.go:580]     Content-Type: application/json
	I0328 01:11:00.383734   12896 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:11:00.383734   12896 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:11:00.383734   12896 round_trippers.go:580]     Content-Length: 4037
	I0328 01:11:00.383734   12896 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:11:00 GMT
	I0328 01:11:00.383734   12896 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000-m02","uid":"dcbe05d8-e31e-4891-a7f5-f1d6a1993934","resourceVersion":"634","creationTimestamp":"2024-03-28T01:10:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_28T01_10_55_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:10:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-manag [truncated 3013 chars]
	I0328 01:11:00.886453   12896 round_trippers.go:463] GET https://172.28.227.122:8443/api/v1/nodes/multinode-240000-m02
	I0328 01:11:00.886621   12896 round_trippers.go:469] Request Headers:
	I0328 01:11:00.886676   12896 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:11:00.886676   12896 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:11:00.890943   12896 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 01:11:00.891790   12896 round_trippers.go:577] Response Headers:
	I0328 01:11:00.891790   12896 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:11:00.891790   12896 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:11:00.891790   12896 round_trippers.go:580]     Content-Length: 4037
	I0328 01:11:00.891849   12896 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:11:00 GMT
	I0328 01:11:00.891849   12896 round_trippers.go:580]     Audit-Id: 3de65d1c-8db5-477f-a1e8-fcb45ac5b17a
	I0328 01:11:00.891849   12896 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:11:00.891849   12896 round_trippers.go:580]     Content-Type: application/json
	I0328 01:11:00.892046   12896 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000-m02","uid":"dcbe05d8-e31e-4891-a7f5-f1d6a1993934","resourceVersion":"634","creationTimestamp":"2024-03-28T01:10:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_28T01_10_55_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:10:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-manag [truncated 3013 chars]
	I0328 01:11:00.892479   12896 node_ready.go:53] node "multinode-240000-m02" has status "Ready":"False"
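The `node_ready` entries above show minikube polling the node object roughly every half second, for up to 6 minutes, until the Ready condition flips to True. A minimal sketch of that timed polling loop, with a generic `check` callable standing in for the GET-and-inspect step (not minikube's actual node_ready.go code):

```python
import time


def wait_for_ready(check, timeout: float = 360.0, interval: float = 0.5) -> bool:
    """Poll check() every `interval` seconds until it returns True or
    `timeout` seconds elapse; mirrors the 6m node-Ready wait in the log."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if check():
            return True          # node reported Ready
        time.sleep(interval)     # back off before the next GET
    return False                 # deadline hit while still NotReady


if __name__ == "__main__":
    # Simulate a node that becomes Ready on the third poll.
    calls = {"n": 0}

    def becomes_ready():
        calls["n"] += 1
        return calls["n"] >= 3

    print(wait_for_ready(becomes_ready, timeout=5.0, interval=0.01))
```

In the real flow, `check` would fetch `/api/v1/nodes/<name>` and test the `Ready` condition in `status.conditions`, returning False while the kubelet is still bootstrapping, as seen in the repeated `"Ready":"False"` entries.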
	I0328 01:11:01.378280   12896 round_trippers.go:463] GET https://172.28.227.122:8443/api/v1/nodes/multinode-240000-m02
	I0328 01:11:01.378280   12896 round_trippers.go:469] Request Headers:
	I0328 01:11:01.378280   12896 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:11:01.378280   12896 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:11:01.383695   12896 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 01:11:01.383695   12896 round_trippers.go:577] Response Headers:
	I0328 01:11:01.383695   12896 round_trippers.go:580]     Audit-Id: 862002fc-58a3-447b-aea5-ab00a25fc939
	I0328 01:11:01.383809   12896 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:11:01.383809   12896 round_trippers.go:580]     Content-Type: application/json
	I0328 01:11:01.383809   12896 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:11:01.383929   12896 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:11:01.383929   12896 round_trippers.go:580]     Content-Length: 4037
	I0328 01:11:01.383994   12896 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:11:01 GMT
	I0328 01:11:01.384143   12896 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000-m02","uid":"dcbe05d8-e31e-4891-a7f5-f1d6a1993934","resourceVersion":"634","creationTimestamp":"2024-03-28T01:10:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_28T01_10_55_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:10:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-manag [truncated 3013 chars]
	I0328 01:11:01.885951   12896 round_trippers.go:463] GET https://172.28.227.122:8443/api/v1/nodes/multinode-240000-m02
	I0328 01:11:01.886048   12896 round_trippers.go:469] Request Headers:
	I0328 01:11:01.886084   12896 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:11:01.886084   12896 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:11:01.889865   12896 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0328 01:11:01.890565   12896 round_trippers.go:577] Response Headers:
	I0328 01:11:01.890565   12896 round_trippers.go:580]     Content-Length: 4037
	I0328 01:11:01.890565   12896 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:11:01 GMT
	I0328 01:11:01.890565   12896 round_trippers.go:580]     Audit-Id: 320939e5-e0e1-4202-8879-06b5c22f3370
	I0328 01:11:01.890565   12896 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:11:01.890565   12896 round_trippers.go:580]     Content-Type: application/json
	I0328 01:11:01.890565   12896 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:11:01.890565   12896 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:11:01.890565   12896 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000-m02","uid":"dcbe05d8-e31e-4891-a7f5-f1d6a1993934","resourceVersion":"634","creationTimestamp":"2024-03-28T01:10:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_28T01_10_55_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:10:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-manag [truncated 3013 chars]
	I0328 01:11:02.390968   12896 round_trippers.go:463] GET https://172.28.227.122:8443/api/v1/nodes/multinode-240000-m02
	I0328 01:11:02.390968   12896 round_trippers.go:469] Request Headers:
	I0328 01:11:02.390968   12896 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:11:02.390968   12896 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:11:02.394530   12896 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0328 01:11:02.394530   12896 round_trippers.go:577] Response Headers:
	I0328 01:11:02.395351   12896 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:11:02.395351   12896 round_trippers.go:580]     Content-Type: application/json
	I0328 01:11:02.395351   12896 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:11:02.395351   12896 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:11:02.395351   12896 round_trippers.go:580]     Content-Length: 4037
	I0328 01:11:02.395351   12896 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:11:02 GMT
	I0328 01:11:02.395422   12896 round_trippers.go:580]     Audit-Id: 1ec6e027-8bcf-4fd2-bee7-333e3e83d8a1
	I0328 01:11:02.395422   12896 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000-m02","uid":"dcbe05d8-e31e-4891-a7f5-f1d6a1993934","resourceVersion":"634","creationTimestamp":"2024-03-28T01:10:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_28T01_10_55_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:10:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-manag [truncated 3013 chars]
	I0328 01:11:02.883519   12896 round_trippers.go:463] GET https://172.28.227.122:8443/api/v1/nodes/multinode-240000-m02
	I0328 01:11:02.883519   12896 round_trippers.go:469] Request Headers:
	I0328 01:11:02.883519   12896 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:11:02.883519   12896 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:11:02.888149   12896 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 01:11:02.888149   12896 round_trippers.go:577] Response Headers:
	I0328 01:11:02.888489   12896 round_trippers.go:580]     Audit-Id: 80f25a72-64f3-46ef-9b02-01cdb05e6843
	I0328 01:11:02.888489   12896 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:11:02.888489   12896 round_trippers.go:580]     Content-Type: application/json
	I0328 01:11:02.888489   12896 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:11:02.888489   12896 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:11:02.888489   12896 round_trippers.go:580]     Content-Length: 4037
	I0328 01:11:02.888489   12896 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:11:02 GMT
	I0328 01:11:02.888755   12896 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000-m02","uid":"dcbe05d8-e31e-4891-a7f5-f1d6a1993934","resourceVersion":"634","creationTimestamp":"2024-03-28T01:10:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_28T01_10_55_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:10:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-manag [truncated 3013 chars]
	I0328 01:11:03.389450   12896 round_trippers.go:463] GET https://172.28.227.122:8443/api/v1/nodes/multinode-240000-m02
	I0328 01:11:03.389450   12896 round_trippers.go:469] Request Headers:
	I0328 01:11:03.389450   12896 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:11:03.389450   12896 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:11:03.395979   12896 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0328 01:11:03.395979   12896 round_trippers.go:577] Response Headers:
	I0328 01:11:03.395979   12896 round_trippers.go:580]     Content-Type: application/json
	I0328 01:11:03.395979   12896 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:11:03.395979   12896 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:11:03.395979   12896 round_trippers.go:580]     Content-Length: 4037
	I0328 01:11:03.395979   12896 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:11:03 GMT
	I0328 01:11:03.395979   12896 round_trippers.go:580]     Audit-Id: df062820-8f35-455d-bd7f-401af9021f74
	I0328 01:11:03.396078   12896 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:11:03.396078   12896 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000-m02","uid":"dcbe05d8-e31e-4891-a7f5-f1d6a1993934","resourceVersion":"634","creationTimestamp":"2024-03-28T01:10:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_28T01_10_55_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:10:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-manag [truncated 3013 chars]
	I0328 01:11:03.396552   12896 node_ready.go:53] node "multinode-240000-m02" has status "Ready":"False"
	I0328 01:11:03.879331   12896 round_trippers.go:463] GET https://172.28.227.122:8443/api/v1/nodes/multinode-240000-m02
	I0328 01:11:03.879331   12896 round_trippers.go:469] Request Headers:
	I0328 01:11:03.879331   12896 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:11:03.879331   12896 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:11:03.884615   12896 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 01:11:03.884615   12896 round_trippers.go:577] Response Headers:
	I0328 01:11:03.884615   12896 round_trippers.go:580]     Audit-Id: b261b18f-5b84-4fb1-b49e-d1cd13fa6f8b
	I0328 01:11:03.884615   12896 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:11:03.885524   12896 round_trippers.go:580]     Content-Type: application/json
	I0328 01:11:03.885524   12896 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:11:03.885524   12896 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:11:03.885524   12896 round_trippers.go:580]     Content-Length: 4037
	I0328 01:11:03.885524   12896 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:11:03 GMT
	I0328 01:11:03.885758   12896 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000-m02","uid":"dcbe05d8-e31e-4891-a7f5-f1d6a1993934","resourceVersion":"634","creationTimestamp":"2024-03-28T01:10:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_28T01_10_55_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:10:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-manag [truncated 3013 chars]
	I0328 01:11:04.386893   12896 round_trippers.go:463] GET https://172.28.227.122:8443/api/v1/nodes/multinode-240000-m02
	I0328 01:11:04.386893   12896 round_trippers.go:469] Request Headers:
	I0328 01:11:04.386893   12896 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:11:04.386893   12896 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:11:04.391078   12896 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 01:11:04.392039   12896 round_trippers.go:577] Response Headers:
	I0328 01:11:04.392080   12896 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:11:04.392080   12896 round_trippers.go:580]     Content-Type: application/json
	I0328 01:11:04.392080   12896 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:11:04.392123   12896 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:11:04.392123   12896 round_trippers.go:580]     Content-Length: 4037
	I0328 01:11:04.392123   12896 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:11:04 GMT
	I0328 01:11:04.392123   12896 round_trippers.go:580]     Audit-Id: d5f98d87-33d4-405b-8fc9-e04be6cdd289
	I0328 01:11:04.392123   12896 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000-m02","uid":"dcbe05d8-e31e-4891-a7f5-f1d6a1993934","resourceVersion":"634","creationTimestamp":"2024-03-28T01:10:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_28T01_10_55_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:10:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-manag [truncated 3013 chars]
	I0328 01:11:04.881877   12896 round_trippers.go:463] GET https://172.28.227.122:8443/api/v1/nodes/multinode-240000-m02
	I0328 01:11:04.881877   12896 round_trippers.go:469] Request Headers:
	I0328 01:11:04.881877   12896 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:11:04.881877   12896 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:11:04.884183   12896 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0328 01:11:04.884183   12896 round_trippers.go:577] Response Headers:
	I0328 01:11:04.884183   12896 round_trippers.go:580]     Audit-Id: da6ffde9-2ce7-490c-bc6c-7546783e40c3
	I0328 01:11:04.884183   12896 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:11:04.884183   12896 round_trippers.go:580]     Content-Type: application/json
	I0328 01:11:04.884183   12896 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:11:04.884183   12896 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:11:04.884962   12896 round_trippers.go:580]     Content-Length: 4037
	I0328 01:11:04.884962   12896 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:11:04 GMT
	I0328 01:11:04.885080   12896 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000-m02","uid":"dcbe05d8-e31e-4891-a7f5-f1d6a1993934","resourceVersion":"634","creationTimestamp":"2024-03-28T01:10:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_28T01_10_55_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:10:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-manag [truncated 3013 chars]
	I0328 01:11:05.390347   12896 round_trippers.go:463] GET https://172.28.227.122:8443/api/v1/nodes/multinode-240000-m02
	I0328 01:11:05.390347   12896 round_trippers.go:469] Request Headers:
	I0328 01:11:05.390347   12896 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:11:05.390579   12896 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:11:05.394388   12896 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0328 01:11:05.394753   12896 round_trippers.go:577] Response Headers:
	I0328 01:11:05.394753   12896 round_trippers.go:580]     Content-Type: application/json
	I0328 01:11:05.394753   12896 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:11:05.394753   12896 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:11:05.394753   12896 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:11:05 GMT
	I0328 01:11:05.394753   12896 round_trippers.go:580]     Audit-Id: 6b5a0f37-053c-4100-9947-c39dfdaebe6a
	I0328 01:11:05.394753   12896 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:11:05.394931   12896 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000-m02","uid":"dcbe05d8-e31e-4891-a7f5-f1d6a1993934","resourceVersion":"643","creationTimestamp":"2024-03-28T01:10:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_28T01_10_55_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:10:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-manag [truncated 3405 chars]
	I0328 01:11:05.885127   12896 round_trippers.go:463] GET https://172.28.227.122:8443/api/v1/nodes/multinode-240000-m02
	I0328 01:11:05.885127   12896 round_trippers.go:469] Request Headers:
	I0328 01:11:05.885127   12896 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:11:05.885392   12896 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:11:05.889395   12896 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 01:11:05.889395   12896 round_trippers.go:577] Response Headers:
	I0328 01:11:05.889395   12896 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:11:05.890270   12896 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:11:05.890270   12896 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:11:05 GMT
	I0328 01:11:05.890270   12896 round_trippers.go:580]     Audit-Id: e221cbc9-1432-444b-b632-570bf7541ce3
	I0328 01:11:05.890270   12896 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:11:05.890270   12896 round_trippers.go:580]     Content-Type: application/json
	I0328 01:11:05.890507   12896 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000-m02","uid":"dcbe05d8-e31e-4891-a7f5-f1d6a1993934","resourceVersion":"643","creationTimestamp":"2024-03-28T01:10:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_28T01_10_55_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:10:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-manag [truncated 3405 chars]
	I0328 01:11:05.890507   12896 node_ready.go:53] node "multinode-240000-m02" has status "Ready":"False"
	I0328 01:11:06.385214   12896 round_trippers.go:463] GET https://172.28.227.122:8443/api/v1/nodes/multinode-240000-m02
	I0328 01:11:06.385214   12896 round_trippers.go:469] Request Headers:
	I0328 01:11:06.385214   12896 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:11:06.385214   12896 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:11:06.450796   12896 round_trippers.go:574] Response Status: 200 OK in 65 milliseconds
	I0328 01:11:06.450796   12896 round_trippers.go:577] Response Headers:
	I0328 01:11:06.450796   12896 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:11:06.450796   12896 round_trippers.go:580]     Content-Type: application/json
	I0328 01:11:06.450796   12896 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:11:06.450796   12896 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:11:06.450796   12896 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:11:06 GMT
	I0328 01:11:06.450796   12896 round_trippers.go:580]     Audit-Id: 668cbdcc-3e28-4c90-8a8d-20e8b51c935c
	I0328 01:11:06.450796   12896 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000-m02","uid":"dcbe05d8-e31e-4891-a7f5-f1d6a1993934","resourceVersion":"643","creationTimestamp":"2024-03-28T01:10:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_28T01_10_55_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:10:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-manag [truncated 3405 chars]
	I0328 01:11:06.890600   12896 round_trippers.go:463] GET https://172.28.227.122:8443/api/v1/nodes/multinode-240000-m02
	I0328 01:11:06.890688   12896 round_trippers.go:469] Request Headers:
	I0328 01:11:06.890688   12896 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:11:06.890688   12896 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:11:06.894159   12896 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0328 01:11:06.894800   12896 round_trippers.go:577] Response Headers:
	I0328 01:11:06.894800   12896 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:11:06.894800   12896 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:11:06.894800   12896 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:11:06 GMT
	I0328 01:11:06.894800   12896 round_trippers.go:580]     Audit-Id: 2f0454ea-2601-4fdb-8f32-95b3d0569f62
	I0328 01:11:06.894800   12896 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:11:06.894800   12896 round_trippers.go:580]     Content-Type: application/json
	I0328 01:11:06.894800   12896 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000-m02","uid":"dcbe05d8-e31e-4891-a7f5-f1d6a1993934","resourceVersion":"643","creationTimestamp":"2024-03-28T01:10:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_28T01_10_55_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:10:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-manag [truncated 3405 chars]
	I0328 01:11:07.384222   12896 round_trippers.go:463] GET https://172.28.227.122:8443/api/v1/nodes/multinode-240000-m02
	I0328 01:11:07.384222   12896 round_trippers.go:469] Request Headers:
	I0328 01:11:07.384222   12896 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:11:07.384222   12896 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:11:07.387234   12896 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0328 01:11:07.387234   12896 round_trippers.go:577] Response Headers:
	I0328 01:11:07.387234   12896 round_trippers.go:580]     Audit-Id: ec80b807-ead5-411f-ac46-0e02d743840d
	I0328 01:11:07.387234   12896 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:11:07.387234   12896 round_trippers.go:580]     Content-Type: application/json
	I0328 01:11:07.388252   12896 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:11:07.388252   12896 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:11:07.388252   12896 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:11:07 GMT
	I0328 01:11:07.388252   12896 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000-m02","uid":"dcbe05d8-e31e-4891-a7f5-f1d6a1993934","resourceVersion":"643","creationTimestamp":"2024-03-28T01:10:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_28T01_10_55_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:10:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-manag [truncated 3405 chars]
	I0328 01:11:07.889559   12896 round_trippers.go:463] GET https://172.28.227.122:8443/api/v1/nodes/multinode-240000-m02
	I0328 01:11:07.889559   12896 round_trippers.go:469] Request Headers:
	I0328 01:11:07.889559   12896 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:11:07.889559   12896 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:11:07.894955   12896 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 01:11:07.894955   12896 round_trippers.go:577] Response Headers:
	I0328 01:11:07.894955   12896 round_trippers.go:580]     Content-Type: application/json
	I0328 01:11:07.894955   12896 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:11:07.894955   12896 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:11:07.894955   12896 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:11:07 GMT
	I0328 01:11:07.894955   12896 round_trippers.go:580]     Audit-Id: 4815f1e6-276f-4f4b-8e8b-058b6267fa59
	I0328 01:11:07.894955   12896 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:11:07.895298   12896 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000-m02","uid":"dcbe05d8-e31e-4891-a7f5-f1d6a1993934","resourceVersion":"643","creationTimestamp":"2024-03-28T01:10:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_28T01_10_55_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:10:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-manag [truncated 3405 chars]
	I0328 01:11:07.895857   12896 node_ready.go:53] node "multinode-240000-m02" has status "Ready":"False"
	I0328 01:11:08.382971   12896 round_trippers.go:463] GET https://172.28.227.122:8443/api/v1/nodes/multinode-240000-m02
	I0328 01:11:08.382971   12896 round_trippers.go:469] Request Headers:
	I0328 01:11:08.382971   12896 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:11:08.382971   12896 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:11:08.386977   12896 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 01:11:08.386977   12896 round_trippers.go:577] Response Headers:
	I0328 01:11:08.386977   12896 round_trippers.go:580]     Content-Type: application/json
	I0328 01:11:08.386977   12896 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:11:08.386977   12896 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:11:08.386977   12896 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:11:08 GMT
	I0328 01:11:08.386977   12896 round_trippers.go:580]     Audit-Id: d0bee71b-5626-48cc-9e1c-50c7d8902b2b
	I0328 01:11:08.386977   12896 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:11:08.388115   12896 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000-m02","uid":"dcbe05d8-e31e-4891-a7f5-f1d6a1993934","resourceVersion":"643","creationTimestamp":"2024-03-28T01:10:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_28T01_10_55_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:10:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-manag [truncated 3405 chars]
	I0328 01:11:08.890030   12896 round_trippers.go:463] GET https://172.28.227.122:8443/api/v1/nodes/multinode-240000-m02
	I0328 01:11:08.890030   12896 round_trippers.go:469] Request Headers:
	I0328 01:11:08.890030   12896 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:11:08.890030   12896 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:11:08.893950   12896 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0328 01:11:08.893950   12896 round_trippers.go:577] Response Headers:
	I0328 01:11:08.893950   12896 round_trippers.go:580]     Audit-Id: a427c3da-2df9-47f1-8944-7a2cc4e5789b
	I0328 01:11:08.893950   12896 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:11:08.893950   12896 round_trippers.go:580]     Content-Type: application/json
	I0328 01:11:08.893950   12896 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:11:08.893950   12896 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:11:08.893950   12896 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:11:08 GMT
	I0328 01:11:08.894959   12896 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000-m02","uid":"dcbe05d8-e31e-4891-a7f5-f1d6a1993934","resourceVersion":"643","creationTimestamp":"2024-03-28T01:10:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_28T01_10_55_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:10:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-manag [truncated 3405 chars]
	I0328 01:11:09.384567   12896 round_trippers.go:463] GET https://172.28.227.122:8443/api/v1/nodes/multinode-240000-m02
	I0328 01:11:09.384567   12896 round_trippers.go:469] Request Headers:
	I0328 01:11:09.384567   12896 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:11:09.384567   12896 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:11:09.388723   12896 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 01:11:09.388723   12896 round_trippers.go:577] Response Headers:
	I0328 01:11:09.388723   12896 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:11:09.388723   12896 round_trippers.go:580]     Content-Type: application/json
	I0328 01:11:09.388723   12896 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:11:09.388723   12896 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:11:09.388723   12896 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:11:09 GMT
	I0328 01:11:09.388723   12896 round_trippers.go:580]     Audit-Id: f68e7e52-4be0-4b9e-8c02-19e7f3ed04e0
	I0328 01:11:09.389721   12896 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000-m02","uid":"dcbe05d8-e31e-4891-a7f5-f1d6a1993934","resourceVersion":"643","creationTimestamp":"2024-03-28T01:10:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_28T01_10_55_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:10:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-manag [truncated 3405 chars]
	I0328 01:11:09.878310   12896 round_trippers.go:463] GET https://172.28.227.122:8443/api/v1/nodes/multinode-240000-m02
	I0328 01:11:09.878310   12896 round_trippers.go:469] Request Headers:
	I0328 01:11:09.878310   12896 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:11:09.878310   12896 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:11:09.883088   12896 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 01:11:09.883498   12896 round_trippers.go:577] Response Headers:
	I0328 01:11:09.883498   12896 round_trippers.go:580]     Content-Type: application/json
	I0328 01:11:09.883498   12896 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:11:09.883498   12896 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:11:09.883498   12896 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:11:09 GMT
	I0328 01:11:09.883498   12896 round_trippers.go:580]     Audit-Id: ed93e9fc-13c4-4ba4-bc6b-50dfc6c046de
	I0328 01:11:09.883498   12896 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:11:09.883498   12896 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000-m02","uid":"dcbe05d8-e31e-4891-a7f5-f1d6a1993934","resourceVersion":"643","creationTimestamp":"2024-03-28T01:10:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_28T01_10_55_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:10:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-manag [truncated 3405 chars]
	I0328 01:11:10.386375   12896 round_trippers.go:463] GET https://172.28.227.122:8443/api/v1/nodes/multinode-240000-m02
	I0328 01:11:10.386375   12896 round_trippers.go:469] Request Headers:
	I0328 01:11:10.386462   12896 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:11:10.386462   12896 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:11:10.390901   12896 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0328 01:11:10.390958   12896 round_trippers.go:577] Response Headers:
	I0328 01:11:10.390958   12896 round_trippers.go:580]     Content-Type: application/json
	I0328 01:11:10.390958   12896 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:11:10.390958   12896 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:11:10.390958   12896 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:11:10 GMT
	I0328 01:11:10.391049   12896 round_trippers.go:580]     Audit-Id: d2c4e099-6fa1-4881-a132-325340ebd98d
	I0328 01:11:10.391049   12896 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:11:10.391166   12896 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000-m02","uid":"dcbe05d8-e31e-4891-a7f5-f1d6a1993934","resourceVersion":"643","creationTimestamp":"2024-03-28T01:10:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_28T01_10_55_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:10:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-manag [truncated 3405 chars]
	I0328 01:11:10.391868   12896 node_ready.go:53] node "multinode-240000-m02" has status "Ready":"False"
	I0328 01:11:10.876403   12896 round_trippers.go:463] GET https://172.28.227.122:8443/api/v1/nodes/multinode-240000-m02
	I0328 01:11:10.876403   12896 round_trippers.go:469] Request Headers:
	I0328 01:11:10.876403   12896 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:11:10.876403   12896 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:11:10.880087   12896 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0328 01:11:10.880087   12896 round_trippers.go:577] Response Headers:
	I0328 01:11:10.880795   12896 round_trippers.go:580]     Audit-Id: 187bc5ce-eaa6-4f39-ab97-57333ea74cb3
	I0328 01:11:10.880795   12896 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:11:10.880795   12896 round_trippers.go:580]     Content-Type: application/json
	I0328 01:11:10.880795   12896 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:11:10.880795   12896 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:11:10.880795   12896 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:11:10 GMT
	I0328 01:11:10.881179   12896 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000-m02","uid":"dcbe05d8-e31e-4891-a7f5-f1d6a1993934","resourceVersion":"643","creationTimestamp":"2024-03-28T01:10:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_28T01_10_55_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:10:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-manag [truncated 3405 chars]
	I0328 01:11:11.383965   12896 round_trippers.go:463] GET https://172.28.227.122:8443/api/v1/nodes/multinode-240000-m02
	I0328 01:11:11.383965   12896 round_trippers.go:469] Request Headers:
	I0328 01:11:11.383965   12896 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:11:11.383965   12896 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:11:11.387543   12896 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0328 01:11:11.387543   12896 round_trippers.go:577] Response Headers:
	I0328 01:11:11.387543   12896 round_trippers.go:580]     Audit-Id: 173e2cb5-4f2d-4dbe-8781-0ccc546b0920
	I0328 01:11:11.387543   12896 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:11:11.387543   12896 round_trippers.go:580]     Content-Type: application/json
	I0328 01:11:11.387543   12896 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:11:11.387543   12896 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:11:11.387543   12896 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:11:11 GMT
	I0328 01:11:11.388703   12896 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000-m02","uid":"dcbe05d8-e31e-4891-a7f5-f1d6a1993934","resourceVersion":"643","creationTimestamp":"2024-03-28T01:10:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_28T01_10_55_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:10:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-manag [truncated 3405 chars]
	I0328 01:11:11.886287   12896 round_trippers.go:463] GET https://172.28.227.122:8443/api/v1/nodes/multinode-240000-m02
	I0328 01:11:11.886287   12896 round_trippers.go:469] Request Headers:
	I0328 01:11:11.886287   12896 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:11:11.886287   12896 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:11:11.890073   12896 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0328 01:11:11.890073   12896 round_trippers.go:577] Response Headers:
	I0328 01:11:11.890073   12896 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:11:11 GMT
	I0328 01:11:11.890073   12896 round_trippers.go:580]     Audit-Id: f9ec19c7-109c-4312-b504-887fa5870fb3
	I0328 01:11:11.890073   12896 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:11:11.890073   12896 round_trippers.go:580]     Content-Type: application/json
	I0328 01:11:11.890073   12896 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:11:11.890073   12896 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:11:11.891415   12896 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000-m02","uid":"dcbe05d8-e31e-4891-a7f5-f1d6a1993934","resourceVersion":"643","creationTimestamp":"2024-03-28T01:10:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_28T01_10_55_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:10:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-manag [truncated 3405 chars]
	I0328 01:11:12.392591   12896 round_trippers.go:463] GET https://172.28.227.122:8443/api/v1/nodes/multinode-240000-m02
	I0328 01:11:12.392591   12896 round_trippers.go:469] Request Headers:
	I0328 01:11:12.392591   12896 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:11:12.392591   12896 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:11:12.397012   12896 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0328 01:11:12.397055   12896 round_trippers.go:577] Response Headers:
	I0328 01:11:12.397055   12896 round_trippers.go:580]     Content-Type: application/json
	I0328 01:11:12.397092   12896 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:11:12.397092   12896 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:11:12.397092   12896 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:11:12 GMT
	I0328 01:11:12.397092   12896 round_trippers.go:580]     Audit-Id: 76f684a1-dede-4bf0-809e-3ace7f92b2cc
	I0328 01:11:12.397092   12896 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:11:12.397344   12896 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000-m02","uid":"dcbe05d8-e31e-4891-a7f5-f1d6a1993934","resourceVersion":"643","creationTimestamp":"2024-03-28T01:10:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_28T01_10_55_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:10:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-manag [truncated 3405 chars]
	I0328 01:11:12.397981   12896 node_ready.go:53] node "multinode-240000-m02" has status "Ready":"False"
	I0328 01:11:12.884689   12896 round_trippers.go:463] GET https://172.28.227.122:8443/api/v1/nodes/multinode-240000-m02
	I0328 01:11:12.884689   12896 round_trippers.go:469] Request Headers:
	I0328 01:11:12.884689   12896 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:11:12.884689   12896 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:11:12.888859   12896 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 01:11:12.888859   12896 round_trippers.go:577] Response Headers:
	I0328 01:11:12.888859   12896 round_trippers.go:580]     Audit-Id: 86778446-c6c2-4afe-a814-768a8319eba2
	I0328 01:11:12.888859   12896 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:11:12.889679   12896 round_trippers.go:580]     Content-Type: application/json
	I0328 01:11:12.889679   12896 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:11:12.889679   12896 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:11:12.889679   12896 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:11:12 GMT
	I0328 01:11:12.889907   12896 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000-m02","uid":"dcbe05d8-e31e-4891-a7f5-f1d6a1993934","resourceVersion":"643","creationTimestamp":"2024-03-28T01:10:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_28T01_10_55_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:10:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-manag [truncated 3405 chars]
	I0328 01:11:13.388084   12896 round_trippers.go:463] GET https://172.28.227.122:8443/api/v1/nodes/multinode-240000-m02
	I0328 01:11:13.388196   12896 round_trippers.go:469] Request Headers:
	I0328 01:11:13.388196   12896 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:11:13.388196   12896 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:11:13.391533   12896 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0328 01:11:13.391533   12896 round_trippers.go:577] Response Headers:
	I0328 01:11:13.391960   12896 round_trippers.go:580]     Audit-Id: c1fcc7d5-1f9d-416a-ac91-3e385a6243e5
	I0328 01:11:13.391960   12896 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:11:13.391960   12896 round_trippers.go:580]     Content-Type: application/json
	I0328 01:11:13.391960   12896 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:11:13.391960   12896 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:11:13.391960   12896 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:11:13 GMT
	I0328 01:11:13.392249   12896 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000-m02","uid":"dcbe05d8-e31e-4891-a7f5-f1d6a1993934","resourceVersion":"643","creationTimestamp":"2024-03-28T01:10:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_28T01_10_55_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:10:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-manag [truncated 3405 chars]
	I0328 01:11:13.886918   12896 round_trippers.go:463] GET https://172.28.227.122:8443/api/v1/nodes/multinode-240000-m02
	I0328 01:11:13.886918   12896 round_trippers.go:469] Request Headers:
	I0328 01:11:13.886918   12896 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:11:13.886918   12896 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:11:13.892569   12896 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 01:11:13.892569   12896 round_trippers.go:577] Response Headers:
	I0328 01:11:13.892569   12896 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:11:13.892569   12896 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:11:13.892569   12896 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:11:13 GMT
	I0328 01:11:13.892569   12896 round_trippers.go:580]     Audit-Id: b342eb0d-d505-4b7d-ae1c-f4f31504622d
	I0328 01:11:13.892569   12896 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:11:13.892569   12896 round_trippers.go:580]     Content-Type: application/json
	I0328 01:11:13.892569   12896 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000-m02","uid":"dcbe05d8-e31e-4891-a7f5-f1d6a1993934","resourceVersion":"643","creationTimestamp":"2024-03-28T01:10:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_28T01_10_55_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:10:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-manag [truncated 3405 chars]
	I0328 01:11:14.378885   12896 round_trippers.go:463] GET https://172.28.227.122:8443/api/v1/nodes/multinode-240000-m02
	I0328 01:11:14.378963   12896 round_trippers.go:469] Request Headers:
	I0328 01:11:14.379035   12896 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:11:14.379035   12896 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:11:14.383405   12896 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 01:11:14.383582   12896 round_trippers.go:577] Response Headers:
	I0328 01:11:14.383582   12896 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:11:14.383582   12896 round_trippers.go:580]     Content-Type: application/json
	I0328 01:11:14.383582   12896 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:11:14.383582   12896 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:11:14.383582   12896 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:11:14 GMT
	I0328 01:11:14.383582   12896 round_trippers.go:580]     Audit-Id: e1891720-6dd8-4109-97d8-9b2aaddff3c6
	I0328 01:11:14.383873   12896 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000-m02","uid":"dcbe05d8-e31e-4891-a7f5-f1d6a1993934","resourceVersion":"643","creationTimestamp":"2024-03-28T01:10:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_28T01_10_55_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:10:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-manag [truncated 3405 chars]
	I0328 01:11:14.883642   12896 round_trippers.go:463] GET https://172.28.227.122:8443/api/v1/nodes/multinode-240000-m02
	I0328 01:11:14.883642   12896 round_trippers.go:469] Request Headers:
	I0328 01:11:14.883642   12896 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:11:14.883642   12896 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:11:14.888518   12896 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 01:11:14.888518   12896 round_trippers.go:577] Response Headers:
	I0328 01:11:14.888518   12896 round_trippers.go:580]     Audit-Id: f83c238a-4564-497d-b21e-da3b4373f059
	I0328 01:11:14.888518   12896 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:11:14.888518   12896 round_trippers.go:580]     Content-Type: application/json
	I0328 01:11:14.888518   12896 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:11:14.888518   12896 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:11:14.888518   12896 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:11:14 GMT
	I0328 01:11:14.889824   12896 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000-m02","uid":"dcbe05d8-e31e-4891-a7f5-f1d6a1993934","resourceVersion":"643","creationTimestamp":"2024-03-28T01:10:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_28T01_10_55_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:10:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-manag [truncated 3405 chars]
	I0328 01:11:14.890398   12896 node_ready.go:53] node "multinode-240000-m02" has status "Ready":"False"
	I0328 01:11:15.384510   12896 round_trippers.go:463] GET https://172.28.227.122:8443/api/v1/nodes/multinode-240000-m02
	I0328 01:11:15.384633   12896 round_trippers.go:469] Request Headers:
	I0328 01:11:15.384633   12896 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:11:15.384633   12896 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:11:15.388106   12896 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0328 01:11:15.388106   12896 round_trippers.go:577] Response Headers:
	I0328 01:11:15.388106   12896 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:11:15.388106   12896 round_trippers.go:580]     Content-Type: application/json
	I0328 01:11:15.388504   12896 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:11:15.388504   12896 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:11:15.388504   12896 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:11:15 GMT
	I0328 01:11:15.388504   12896 round_trippers.go:580]     Audit-Id: a9a90a43-fb6a-45f8-b920-7654677d8776
	I0328 01:11:15.388618   12896 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000-m02","uid":"dcbe05d8-e31e-4891-a7f5-f1d6a1993934","resourceVersion":"643","creationTimestamp":"2024-03-28T01:10:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_28T01_10_55_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:10:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-manag [truncated 3405 chars]
	I0328 01:11:15.881168   12896 round_trippers.go:463] GET https://172.28.227.122:8443/api/v1/nodes/multinode-240000-m02
	I0328 01:11:15.881356   12896 round_trippers.go:469] Request Headers:
	I0328 01:11:15.881356   12896 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:11:15.881356   12896 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:11:15.885171   12896 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0328 01:11:15.885171   12896 round_trippers.go:577] Response Headers:
	I0328 01:11:15.885171   12896 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:11:15.885171   12896 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:11:15.885171   12896 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:11:15 GMT
	I0328 01:11:15.885171   12896 round_trippers.go:580]     Audit-Id: 414155ba-99db-4a76-8c87-a7334a09be02
	I0328 01:11:15.885171   12896 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:11:15.885171   12896 round_trippers.go:580]     Content-Type: application/json
	I0328 01:11:15.885709   12896 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000-m02","uid":"dcbe05d8-e31e-4891-a7f5-f1d6a1993934","resourceVersion":"643","creationTimestamp":"2024-03-28T01:10:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_28T01_10_55_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:10:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-manag [truncated 3405 chars]
	I0328 01:11:16.385018   12896 round_trippers.go:463] GET https://172.28.227.122:8443/api/v1/nodes/multinode-240000-m02
	I0328 01:11:16.385197   12896 round_trippers.go:469] Request Headers:
	I0328 01:11:16.385197   12896 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:11:16.385197   12896 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:11:16.388564   12896 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0328 01:11:16.388564   12896 round_trippers.go:577] Response Headers:
	I0328 01:11:16.388564   12896 round_trippers.go:580]     Audit-Id: 7aee1b4f-9cbd-405a-b2bf-dc3585d1788d
	I0328 01:11:16.388564   12896 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:11:16.388564   12896 round_trippers.go:580]     Content-Type: application/json
	I0328 01:11:16.388564   12896 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:11:16.388564   12896 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:11:16.388564   12896 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:11:16 GMT
	I0328 01:11:16.389679   12896 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000-m02","uid":"dcbe05d8-e31e-4891-a7f5-f1d6a1993934","resourceVersion":"643","creationTimestamp":"2024-03-28T01:10:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_28T01_10_55_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:10:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-manag [truncated 3405 chars]
	I0328 01:11:16.887088   12896 round_trippers.go:463] GET https://172.28.227.122:8443/api/v1/nodes/multinode-240000-m02
	I0328 01:11:16.887088   12896 round_trippers.go:469] Request Headers:
	I0328 01:11:16.887088   12896 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:11:16.887088   12896 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:11:16.891788   12896 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 01:11:16.891788   12896 round_trippers.go:577] Response Headers:
	I0328 01:11:16.891788   12896 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:11:16.891788   12896 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:11:16.891788   12896 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:11:16 GMT
	I0328 01:11:16.891788   12896 round_trippers.go:580]     Audit-Id: 06be92e1-863a-4edb-8c67-e8e122524dfa
	I0328 01:11:16.891788   12896 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:11:16.891788   12896 round_trippers.go:580]     Content-Type: application/json
	I0328 01:11:16.891788   12896 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000-m02","uid":"dcbe05d8-e31e-4891-a7f5-f1d6a1993934","resourceVersion":"643","creationTimestamp":"2024-03-28T01:10:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_28T01_10_55_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:10:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-manag [truncated 3405 chars]
	I0328 01:11:16.892481   12896 node_ready.go:53] node "multinode-240000-m02" has status "Ready":"False"
	I0328 01:11:17.390260   12896 round_trippers.go:463] GET https://172.28.227.122:8443/api/v1/nodes/multinode-240000-m02
	I0328 01:11:17.390260   12896 round_trippers.go:469] Request Headers:
	I0328 01:11:17.390260   12896 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:11:17.390260   12896 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:11:17.394269   12896 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 01:11:17.394269   12896 round_trippers.go:577] Response Headers:
	I0328 01:11:17.394269   12896 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:11:17.394269   12896 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:11:17.394269   12896 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:11:17 GMT
	I0328 01:11:17.394269   12896 round_trippers.go:580]     Audit-Id: c89273b9-0f73-4d4c-afcf-9fcf870ca0b4
	I0328 01:11:17.394865   12896 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:11:17.394865   12896 round_trippers.go:580]     Content-Type: application/json
	I0328 01:11:17.395018   12896 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000-m02","uid":"dcbe05d8-e31e-4891-a7f5-f1d6a1993934","resourceVersion":"643","creationTimestamp":"2024-03-28T01:10:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_28T01_10_55_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:10:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-manag [truncated 3405 chars]
	I0328 01:11:17.881873   12896 round_trippers.go:463] GET https://172.28.227.122:8443/api/v1/nodes/multinode-240000-m02
	I0328 01:11:17.881950   12896 round_trippers.go:469] Request Headers:
	I0328 01:11:17.881950   12896 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:11:17.881950   12896 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:11:17.886314   12896 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 01:11:17.886314   12896 round_trippers.go:577] Response Headers:
	I0328 01:11:17.886314   12896 round_trippers.go:580]     Audit-Id: 5b4088b1-11a0-4616-b62e-be6f1d0abba6
	I0328 01:11:17.886314   12896 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:11:17.886314   12896 round_trippers.go:580]     Content-Type: application/json
	I0328 01:11:17.886314   12896 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:11:17.886314   12896 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:11:17.886314   12896 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:11:17 GMT
	I0328 01:11:17.887433   12896 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000-m02","uid":"dcbe05d8-e31e-4891-a7f5-f1d6a1993934","resourceVersion":"668","creationTimestamp":"2024-03-28T01:10:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_28T01_10_55_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:10:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-manag [truncated 3271 chars]
	I0328 01:11:17.887787   12896 node_ready.go:49] node "multinode-240000-m02" has status "Ready":"True"
	I0328 01:11:17.887919   12896 node_ready.go:38] duration metric: took 21.5117149s for node "multinode-240000-m02" to be "Ready" ...
	I0328 01:11:17.887919   12896 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0328 01:11:17.888332   12896 round_trippers.go:463] GET https://172.28.227.122:8443/api/v1/namespaces/kube-system/pods
	I0328 01:11:17.888369   12896 round_trippers.go:469] Request Headers:
	I0328 01:11:17.888369   12896 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:11:17.888369   12896 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:11:17.898255   12896 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0328 01:11:17.898595   12896 round_trippers.go:577] Response Headers:
	I0328 01:11:17.898595   12896 round_trippers.go:580]     Content-Type: application/json
	I0328 01:11:17.898595   12896 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:11:17.898595   12896 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:11:17.898595   12896 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:11:17 GMT
	I0328 01:11:17.898595   12896 round_trippers.go:580]     Audit-Id: 697765a9-d4e2-4b4d-b93e-34a20ae81818
	I0328 01:11:17.898595   12896 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:11:17.901446   12896 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"668"},"items":[{"metadata":{"name":"coredns-76f75df574-776ph","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"dc1416cc-736d-4eab-b95d-e963572b78e3","resourceVersion":"456","creationTimestamp":"2024-03-28T01:07:44Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"840f6c47-adf3-4c08-8b73-04e55f98f236","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:07:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"840f6c47-adf3-4c08-8b73-04e55f98f236\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 70484 chars]
	I0328 01:11:17.904880   12896 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-776ph" in "kube-system" namespace to be "Ready" ...
	I0328 01:11:17.905045   12896 round_trippers.go:463] GET https://172.28.227.122:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-776ph
	I0328 01:11:17.905045   12896 round_trippers.go:469] Request Headers:
	I0328 01:11:17.905120   12896 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:11:17.905120   12896 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:11:17.908775   12896 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0328 01:11:17.908775   12896 round_trippers.go:577] Response Headers:
	I0328 01:11:17.908775   12896 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:11:17 GMT
	I0328 01:11:17.908775   12896 round_trippers.go:580]     Audit-Id: fe8daf16-1899-46fb-bf86-3a9acd8261bd
	I0328 01:11:17.908775   12896 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:11:17.908775   12896 round_trippers.go:580]     Content-Type: application/json
	I0328 01:11:17.908775   12896 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:11:17.908775   12896 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:11:17.910080   12896 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-76f75df574-776ph","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"dc1416cc-736d-4eab-b95d-e963572b78e3","resourceVersion":"456","creationTimestamp":"2024-03-28T01:07:44Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"840f6c47-adf3-4c08-8b73-04e55f98f236","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:07:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"840f6c47-adf3-4c08-8b73-04e55f98f236\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6580 chars]
	I0328 01:11:17.911160   12896 round_trippers.go:463] GET https://172.28.227.122:8443/api/v1/nodes/multinode-240000
	I0328 01:11:17.911217   12896 round_trippers.go:469] Request Headers:
	I0328 01:11:17.911217   12896 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:11:17.911217   12896 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:11:17.917260   12896 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0328 01:11:17.917360   12896 round_trippers.go:577] Response Headers:
	I0328 01:11:17.917416   12896 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:11:17.917416   12896 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:11:17.917416   12896 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:11:17 GMT
	I0328 01:11:17.917416   12896 round_trippers.go:580]     Audit-Id: 1ee8e9d2-8bed-49d3-b913-9d012a3170f4
	I0328 01:11:17.917416   12896 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:11:17.917416   12896 round_trippers.go:580]     Content-Type: application/json
	I0328 01:11:17.918152   12896 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"463","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Fields [truncated 4967 chars]
	I0328 01:11:17.918882   12896 pod_ready.go:92] pod "coredns-76f75df574-776ph" in "kube-system" namespace has status "Ready":"True"
	I0328 01:11:17.918970   12896 pod_ready.go:81] duration metric: took 14.0166ms for pod "coredns-76f75df574-776ph" in "kube-system" namespace to be "Ready" ...
	I0328 01:11:17.918970   12896 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-240000" in "kube-system" namespace to be "Ready" ...
	I0328 01:11:17.919071   12896 round_trippers.go:463] GET https://172.28.227.122:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-240000
	I0328 01:11:17.919071   12896 round_trippers.go:469] Request Headers:
	I0328 01:11:17.919164   12896 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:11:17.919164   12896 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:11:17.922545   12896 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0328 01:11:17.922545   12896 round_trippers.go:577] Response Headers:
	I0328 01:11:17.922545   12896 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:11:17 GMT
	I0328 01:11:17.922545   12896 round_trippers.go:580]     Audit-Id: 6ca33fc8-45a4-4c61-978c-f224d061f42f
	I0328 01:11:17.922545   12896 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:11:17.922545   12896 round_trippers.go:580]     Content-Type: application/json
	I0328 01:11:17.922545   12896 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:11:17.922545   12896 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:11:17.922545   12896 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-240000","namespace":"kube-system","uid":"8c9e76e4-ed9f-4595-aa5e-ddd6e74f4e93","resourceVersion":"418","creationTimestamp":"2024-03-28T01:07:31Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.28.227.122:2379","kubernetes.io/config.hash":"3bf911dad00226d1456d6201aff35c8b","kubernetes.io/config.mirror":"3bf911dad00226d1456d6201aff35c8b","kubernetes.io/config.seen":"2024-03-28T01:07:31.458002457Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:07:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6170 chars]
	I0328 01:11:17.924255   12896 round_trippers.go:463] GET https://172.28.227.122:8443/api/v1/nodes/multinode-240000
	I0328 01:11:17.924255   12896 round_trippers.go:469] Request Headers:
	I0328 01:11:17.924255   12896 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:11:17.924255   12896 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:11:17.927504   12896 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0328 01:11:17.927910   12896 round_trippers.go:577] Response Headers:
	I0328 01:11:17.927910   12896 round_trippers.go:580]     Content-Type: application/json
	I0328 01:11:17.927910   12896 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:11:17.927910   12896 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:11:17.927910   12896 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:11:17 GMT
	I0328 01:11:17.927910   12896 round_trippers.go:580]     Audit-Id: 2e402f1b-7824-4acd-9589-c2f1d4a5e0da
	I0328 01:11:17.928081   12896 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:11:17.928327   12896 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"463","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Fields [truncated 4967 chars]
	I0328 01:11:17.928724   12896 pod_ready.go:92] pod "etcd-multinode-240000" in "kube-system" namespace has status "Ready":"True"
	I0328 01:11:17.928724   12896 pod_ready.go:81] duration metric: took 9.7539ms for pod "etcd-multinode-240000" in "kube-system" namespace to be "Ready" ...
	I0328 01:11:17.928785   12896 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-240000" in "kube-system" namespace to be "Ready" ...
	I0328 01:11:17.928838   12896 round_trippers.go:463] GET https://172.28.227.122:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-240000
	I0328 01:11:17.928910   12896 round_trippers.go:469] Request Headers:
	I0328 01:11:17.928910   12896 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:11:17.928910   12896 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:11:17.931227   12896 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0328 01:11:17.931227   12896 round_trippers.go:577] Response Headers:
	I0328 01:11:17.931227   12896 round_trippers.go:580]     Content-Type: application/json
	I0328 01:11:17.931227   12896 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:11:17.931227   12896 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:11:17.931752   12896 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:11:17 GMT
	I0328 01:11:17.931752   12896 round_trippers.go:580]     Audit-Id: 370393c8-dcbd-4b20-8f1c-c94f2e7f01fb
	I0328 01:11:17.931752   12896 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:11:17.932006   12896 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-240000","namespace":"kube-system","uid":"7736298d-3898-4693-84bf-2311305bf52c","resourceVersion":"420","creationTimestamp":"2024-03-28T01:07:31Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.28.227.122:8443","kubernetes.io/config.hash":"08b85a8adf05b50d7739532a291175d4","kubernetes.io/config.mirror":"08b85a8adf05b50d7739532a291175d4","kubernetes.io/config.seen":"2024-03-28T01:07:31.458006857Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:07:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7704 chars]
	I0328 01:11:17.932326   12896 round_trippers.go:463] GET https://172.28.227.122:8443/api/v1/nodes/multinode-240000
	I0328 01:11:17.932326   12896 round_trippers.go:469] Request Headers:
	I0328 01:11:17.932326   12896 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:11:17.932326   12896 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:11:17.935091   12896 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0328 01:11:17.935091   12896 round_trippers.go:577] Response Headers:
	I0328 01:11:17.935091   12896 round_trippers.go:580]     Audit-Id: 911094be-92ad-4ddd-91dc-fecb6c7a71cc
	I0328 01:11:17.935091   12896 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:11:17.935425   12896 round_trippers.go:580]     Content-Type: application/json
	I0328 01:11:17.935425   12896 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:11:17.935425   12896 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:11:17.935425   12896 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:11:17 GMT
	I0328 01:11:17.935551   12896 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"463","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Fields [truncated 4967 chars]
	I0328 01:11:17.935899   12896 pod_ready.go:92] pod "kube-apiserver-multinode-240000" in "kube-system" namespace has status "Ready":"True"
	I0328 01:11:17.936003   12896 pod_ready.go:81] duration metric: took 7.2179ms for pod "kube-apiserver-multinode-240000" in "kube-system" namespace to be "Ready" ...
	I0328 01:11:17.936003   12896 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-240000" in "kube-system" namespace to be "Ready" ...
	I0328 01:11:17.936003   12896 round_trippers.go:463] GET https://172.28.227.122:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-240000
	I0328 01:11:17.936124   12896 round_trippers.go:469] Request Headers:
	I0328 01:11:17.936124   12896 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:11:17.936124   12896 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:11:17.940453   12896 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 01:11:17.940453   12896 round_trippers.go:577] Response Headers:
	I0328 01:11:17.940453   12896 round_trippers.go:580]     Content-Type: application/json
	I0328 01:11:17.940453   12896 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:11:17.940453   12896 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:11:17.940453   12896 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:11:17 GMT
	I0328 01:11:17.940453   12896 round_trippers.go:580]     Audit-Id: b01a47e5-b012-4619-b3c7-a9eaa1b0d2e4
	I0328 01:11:17.940453   12896 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:11:17.941032   12896 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-240000","namespace":"kube-system","uid":"4a79ab06-2314-43bb-8e37-45b9aab24e4e","resourceVersion":"423","creationTimestamp":"2024-03-28T01:07:31Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"092744cdc60a216294790b52c372bdaa","kubernetes.io/config.mirror":"092744cdc60a216294790b52c372bdaa","kubernetes.io/config.seen":"2024-03-28T01:07:31.458008757Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:07:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7269 chars]
	I0328 01:11:17.942018   12896 round_trippers.go:463] GET https://172.28.227.122:8443/api/v1/nodes/multinode-240000
	I0328 01:11:17.942018   12896 round_trippers.go:469] Request Headers:
	I0328 01:11:17.942018   12896 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:11:17.942080   12896 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:11:17.945341   12896 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0328 01:11:17.945341   12896 round_trippers.go:577] Response Headers:
	I0328 01:11:17.945341   12896 round_trippers.go:580]     Audit-Id: 0ff18589-9a0b-41ca-9e36-2497a8f6cabd
	I0328 01:11:17.945341   12896 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:11:17.945341   12896 round_trippers.go:580]     Content-Type: application/json
	I0328 01:11:17.945341   12896 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:11:17.945341   12896 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:11:17.945341   12896 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:11:17 GMT
	I0328 01:11:17.945872   12896 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"463","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Fields [truncated 4967 chars]
	I0328 01:11:17.946457   12896 pod_ready.go:92] pod "kube-controller-manager-multinode-240000" in "kube-system" namespace has status "Ready":"True"
	I0328 01:11:17.946457   12896 pod_ready.go:81] duration metric: took 10.4533ms for pod "kube-controller-manager-multinode-240000" in "kube-system" namespace to be "Ready" ...
	I0328 01:11:17.946457   12896 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-47rqg" in "kube-system" namespace to be "Ready" ...
	I0328 01:11:18.086087   12896 request.go:629] Waited for 139.4304ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.227.122:8443/api/v1/namespaces/kube-system/pods/kube-proxy-47rqg
	I0328 01:11:18.086274   12896 round_trippers.go:463] GET https://172.28.227.122:8443/api/v1/namespaces/kube-system/pods/kube-proxy-47rqg
	I0328 01:11:18.086274   12896 round_trippers.go:469] Request Headers:
	I0328 01:11:18.086398   12896 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:11:18.086398   12896 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:11:18.090201   12896 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0328 01:11:18.090201   12896 round_trippers.go:577] Response Headers:
	I0328 01:11:18.090201   12896 round_trippers.go:580]     Audit-Id: 3a0a2378-0ec0-4699-bc49-b122e82e79b2
	I0328 01:11:18.090201   12896 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:11:18.090201   12896 round_trippers.go:580]     Content-Type: application/json
	I0328 01:11:18.090959   12896 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:11:18.090959   12896 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:11:18.090959   12896 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:11:18 GMT
	I0328 01:11:18.091422   12896 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-47rqg","generateName":"kube-proxy-","namespace":"kube-system","uid":"22fd5683-834d-47ae-a5b4-1ed980514e1b","resourceVersion":"413","creationTimestamp":"2024-03-28T01:07:44Z","labels":{"controller-revision-hash":"7659797656","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"386441f6-e376-4593-92ba-fa739207b68d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:07:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"386441f6-e376-4593-92ba-fa739207b68d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5833 chars]
	I0328 01:11:18.290319   12896 request.go:629] Waited for 198.1508ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.227.122:8443/api/v1/nodes/multinode-240000
	I0328 01:11:18.290380   12896 round_trippers.go:463] GET https://172.28.227.122:8443/api/v1/nodes/multinode-240000
	I0328 01:11:18.290380   12896 round_trippers.go:469] Request Headers:
	I0328 01:11:18.290380   12896 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:11:18.290380   12896 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:11:18.294953   12896 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 01:11:18.294953   12896 round_trippers.go:577] Response Headers:
	I0328 01:11:18.294953   12896 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:11:18.295977   12896 round_trippers.go:580]     Content-Type: application/json
	I0328 01:11:18.296002   12896 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:11:18.296002   12896 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:11:18.296002   12896 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:11:18 GMT
	I0328 01:11:18.296002   12896 round_trippers.go:580]     Audit-Id: 6edbe134-09f8-4e42-96d5-f5d85c184b72
	I0328 01:11:18.296312   12896 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"463","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Fields [truncated 4967 chars]
	I0328 01:11:18.297311   12896 pod_ready.go:92] pod "kube-proxy-47rqg" in "kube-system" namespace has status "Ready":"True"
	I0328 01:11:18.297311   12896 pod_ready.go:81] duration metric: took 350.8519ms for pod "kube-proxy-47rqg" in "kube-system" namespace to be "Ready" ...
	I0328 01:11:18.297400   12896 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-t88gz" in "kube-system" namespace to be "Ready" ...
	I0328 01:11:18.496700   12896 request.go:629] Waited for 199.1766ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.227.122:8443/api/v1/namespaces/kube-system/pods/kube-proxy-t88gz
	I0328 01:11:18.496835   12896 round_trippers.go:463] GET https://172.28.227.122:8443/api/v1/namespaces/kube-system/pods/kube-proxy-t88gz
	I0328 01:11:18.496835   12896 round_trippers.go:469] Request Headers:
	I0328 01:11:18.496835   12896 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:11:18.496835   12896 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:11:18.503133   12896 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0328 01:11:18.503133   12896 round_trippers.go:577] Response Headers:
	I0328 01:11:18.503133   12896 round_trippers.go:580]     Audit-Id: c0b57796-c48f-473c-9b81-76f7e3d10f2a
	I0328 01:11:18.503133   12896 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:11:18.503133   12896 round_trippers.go:580]     Content-Type: application/json
	I0328 01:11:18.503133   12896 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:11:18.503246   12896 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:11:18.503246   12896 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:11:18 GMT
	I0328 01:11:18.503495   12896 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-t88gz","generateName":"kube-proxy-","namespace":"kube-system","uid":"695603ac-ab24-4206-9728-342b2af018e4","resourceVersion":"650","creationTimestamp":"2024-03-28T01:10:54Z","labels":{"controller-revision-hash":"7659797656","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"386441f6-e376-4593-92ba-fa739207b68d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:10:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"386441f6-e376-4593-92ba-fa739207b68d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5841 chars]
	I0328 01:11:18.686879   12896 request.go:629] Waited for 182.5915ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.227.122:8443/api/v1/nodes/multinode-240000-m02
	I0328 01:11:18.687133   12896 round_trippers.go:463] GET https://172.28.227.122:8443/api/v1/nodes/multinode-240000-m02
	I0328 01:11:18.687133   12896 round_trippers.go:469] Request Headers:
	I0328 01:11:18.687133   12896 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:11:18.687133   12896 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:11:18.691170   12896 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 01:11:18.691566   12896 round_trippers.go:577] Response Headers:
	I0328 01:11:18.691566   12896 round_trippers.go:580]     Audit-Id: 71d0dd42-a600-458d-b10d-3307fb1b45aa
	I0328 01:11:18.691566   12896 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:11:18.691566   12896 round_trippers.go:580]     Content-Type: application/json
	I0328 01:11:18.691566   12896 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:11:18.691566   12896 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:11:18.691566   12896 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:11:18 GMT
	I0328 01:11:18.691665   12896 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000-m02","uid":"dcbe05d8-e31e-4891-a7f5-f1d6a1993934","resourceVersion":"668","creationTimestamp":"2024-03-28T01:10:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_28T01_10_55_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:10:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metad
ata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-manag [truncated 3271 chars]
	I0328 01:11:18.692321   12896 pod_ready.go:92] pod "kube-proxy-t88gz" in "kube-system" namespace has status "Ready":"True"
	I0328 01:11:18.692321   12896 pod_ready.go:81] duration metric: took 394.9179ms for pod "kube-proxy-t88gz" in "kube-system" namespace to be "Ready" ...
	I0328 01:11:18.692321   12896 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-240000" in "kube-system" namespace to be "Ready" ...
	I0328 01:11:18.889247   12896 request.go:629] Waited for 196.6953ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.227.122:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-240000
	I0328 01:11:18.889504   12896 round_trippers.go:463] GET https://172.28.227.122:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-240000
	I0328 01:11:18.889504   12896 round_trippers.go:469] Request Headers:
	I0328 01:11:18.889504   12896 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:11:18.889504   12896 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:11:18.895289   12896 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 01:11:18.895289   12896 round_trippers.go:577] Response Headers:
	I0328 01:11:18.895289   12896 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:11:18.895289   12896 round_trippers.go:580]     Content-Type: application/json
	I0328 01:11:18.895289   12896 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:11:18.895289   12896 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:11:18.895289   12896 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:11:18 GMT
	I0328 01:11:18.895289   12896 round_trippers.go:580]     Audit-Id: 5b9e378b-c01c-4640-98a8-364893df9164
	I0328 01:11:18.895289   12896 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-240000","namespace":"kube-system","uid":"7670489f-fb6c-4b5f-80e8-5fe8de8d7d19","resourceVersion":"419","creationTimestamp":"2024-03-28T01:07:28Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"f5f9b00a2a0d8b16290abf555def0fb3","kubernetes.io/config.mirror":"f5f9b00a2a0d8b16290abf555def0fb3","kubernetes.io/config.seen":"2024-03-28T01:07:21.513186595Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:07:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4999 chars]
	I0328 01:11:19.091285   12896 request.go:629] Waited for 194.9007ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.227.122:8443/api/v1/nodes/multinode-240000
	I0328 01:11:19.091558   12896 round_trippers.go:463] GET https://172.28.227.122:8443/api/v1/nodes/multinode-240000
	I0328 01:11:19.091558   12896 round_trippers.go:469] Request Headers:
	I0328 01:11:19.091638   12896 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:11:19.091638   12896 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:11:19.095873   12896 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0328 01:11:19.095873   12896 round_trippers.go:577] Response Headers:
	I0328 01:11:19.095873   12896 round_trippers.go:580]     Audit-Id: 60a2c2f9-c717-471a-90c2-276c0a80bae5
	I0328 01:11:19.095873   12896 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:11:19.095873   12896 round_trippers.go:580]     Content-Type: application/json
	I0328 01:11:19.095873   12896 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:11:19.095873   12896 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:11:19.095873   12896 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:11:19 GMT
	I0328 01:11:19.095873   12896 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"463","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Updat
e","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Fields [truncated 4967 chars]
	I0328 01:11:19.096663   12896 pod_ready.go:92] pod "kube-scheduler-multinode-240000" in "kube-system" namespace has status "Ready":"True"
	I0328 01:11:19.096663   12896 pod_ready.go:81] duration metric: took 404.3393ms for pod "kube-scheduler-multinode-240000" in "kube-system" namespace to be "Ready" ...
	I0328 01:11:19.096738   12896 pod_ready.go:38] duration metric: took 1.20881s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0328 01:11:19.096738   12896 system_svc.go:44] waiting for kubelet service to be running ....
	I0328 01:11:19.110598   12896 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0328 01:11:19.139285   12896 system_svc.go:56] duration metric: took 42.5476ms WaitForService to wait for kubelet
	I0328 01:11:19.139354   12896 kubeadm.go:576] duration metric: took 23.0383072s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0328 01:11:19.139354   12896 node_conditions.go:102] verifying NodePressure condition ...
	I0328 01:11:19.283758   12896 request.go:629] Waited for 144.2306ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.227.122:8443/api/v1/nodes
	I0328 01:11:19.283758   12896 round_trippers.go:463] GET https://172.28.227.122:8443/api/v1/nodes
	I0328 01:11:19.284047   12896 round_trippers.go:469] Request Headers:
	I0328 01:11:19.284047   12896 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:11:19.284047   12896 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:11:19.293147   12896 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0328 01:11:19.293147   12896 round_trippers.go:577] Response Headers:
	I0328 01:11:19.293147   12896 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:11:19.293147   12896 round_trippers.go:580]     Content-Type: application/json
	I0328 01:11:19.293147   12896 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:11:19.293147   12896 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:11:19.293147   12896 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:11:19 GMT
	I0328 01:11:19.293147   12896 round_trippers.go:580]     Audit-Id: 505ee852-59b7-4b9a-a6c4-ed06356c1fa5
	I0328 01:11:19.294090   12896 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"670"},"items":[{"metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"463","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"mana
gedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1" [truncated 9163 chars]
	I0328 01:11:19.295160   12896 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0328 01:11:19.295160   12896 node_conditions.go:123] node cpu capacity is 2
	I0328 01:11:19.295160   12896 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0328 01:11:19.295160   12896 node_conditions.go:123] node cpu capacity is 2
	I0328 01:11:19.295160   12896 node_conditions.go:105] duration metric: took 155.8049ms to run NodePressure ...
	I0328 01:11:19.295160   12896 start.go:240] waiting for startup goroutines ...
	I0328 01:11:19.295160   12896 start.go:254] writing updated cluster config ...
	I0328 01:11:19.311288   12896 ssh_runner.go:195] Run: rm -f paused
	I0328 01:11:19.467981   12896 start.go:600] kubectl: 1.29.3, cluster: 1.29.3 (minor skew: 0)
	I0328 01:11:19.485377   12896 out.go:177] * Done! kubectl is now configured to use "multinode-240000" cluster and "default" namespace by default
	
	
	==> Docker <==
	Mar 28 01:07:58 multinode-240000 dockerd[1349]: time="2024-03-28T01:07:58.502768720Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 28 01:07:58 multinode-240000 dockerd[1349]: time="2024-03-28T01:07:58.507789116Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Mar 28 01:07:58 multinode-240000 dockerd[1349]: time="2024-03-28T01:07:58.507899916Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Mar 28 01:07:58 multinode-240000 dockerd[1349]: time="2024-03-28T01:07:58.507913516Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 28 01:07:58 multinode-240000 dockerd[1349]: time="2024-03-28T01:07:58.508204916Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 28 01:07:58 multinode-240000 cri-dockerd[1234]: time="2024-03-28T01:07:58Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/28426f4e9df5e7247fb25f1d5d48b9917e6d95d1f58292026ed0fde424835379/resolv.conf as [nameserver 172.28.224.1]"
	Mar 28 01:07:58 multinode-240000 cri-dockerd[1234]: time="2024-03-28T01:07:58Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/6b6f67390b0701700963eec28e4c4cc4aa0e852e4ec0f2392f0f6f5d9bdad52a/resolv.conf as [nameserver 172.28.224.1]"
	Mar 28 01:07:58 multinode-240000 dockerd[1349]: time="2024-03-28T01:07:58.925281445Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Mar 28 01:07:58 multinode-240000 dockerd[1349]: time="2024-03-28T01:07:58.925568246Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Mar 28 01:07:58 multinode-240000 dockerd[1349]: time="2024-03-28T01:07:58.925591946Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 28 01:07:58 multinode-240000 dockerd[1349]: time="2024-03-28T01:07:58.925723046Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 28 01:07:58 multinode-240000 dockerd[1349]: time="2024-03-28T01:07:58.964386205Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Mar 28 01:07:58 multinode-240000 dockerd[1349]: time="2024-03-28T01:07:58.964479005Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Mar 28 01:07:58 multinode-240000 dockerd[1349]: time="2024-03-28T01:07:58.964493805Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 28 01:07:58 multinode-240000 dockerd[1349]: time="2024-03-28T01:07:58.964876205Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 28 01:11:46 multinode-240000 dockerd[1349]: time="2024-03-28T01:11:46.944533033Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Mar 28 01:11:46 multinode-240000 dockerd[1349]: time="2024-03-28T01:11:46.945451837Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Mar 28 01:11:46 multinode-240000 dockerd[1349]: time="2024-03-28T01:11:46.945568838Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 28 01:11:46 multinode-240000 dockerd[1349]: time="2024-03-28T01:11:46.945691738Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 28 01:11:47 multinode-240000 cri-dockerd[1234]: time="2024-03-28T01:11:47Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/930fbfde452c0b2b3f13a6751fc648a70e87137f38175cb6dd161b40193b9a79/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Mar 28 01:11:48 multinode-240000 cri-dockerd[1234]: time="2024-03-28T01:11:48Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28"
	Mar 28 01:11:48 multinode-240000 dockerd[1349]: time="2024-03-28T01:11:48.623810953Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Mar 28 01:11:48 multinode-240000 dockerd[1349]: time="2024-03-28T01:11:48.623915654Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Mar 28 01:11:48 multinode-240000 dockerd[1349]: time="2024-03-28T01:11:48.623936854Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 28 01:11:48 multinode-240000 dockerd[1349]: time="2024-03-28T01:11:48.624080854Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	a130300bc7839       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   51 seconds ago      Running             busybox                   0                   930fbfde452c0       busybox-7fdf7869d9-ct428
	29e516c918ef4       cbb01a7bd410d                                                                                         4 minutes ago       Running             coredns                   0                   6b6f67390b070       coredns-76f75df574-776ph
	d02996b2d57bf       6e38f40d628db                                                                                         4 minutes ago       Running             storage-provisioner       0                   28426f4e9df5e       storage-provisioner
	dc9808261b21c       kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988              4 minutes ago       Running             kindnet-cni               0                   6ae82cd0a8489       kindnet-rwghf
	bb0b3c5422645       a1d263b5dc5b0                                                                                         4 minutes ago       Running             kube-proxy                0                   5d9ed3a20e885       kube-proxy-47rqg
	1aa05268773e4       6052a25da3f97                                                                                         5 minutes ago       Running             kube-controller-manager   0                   763932cfdf0b0       kube-controller-manager-multinode-240000
	7061eab02790d       8c390d98f50c0                                                                                         5 minutes ago       Running             kube-scheduler            0                   7415d077c6f81       kube-scheduler-multinode-240000
	a01212226d03a       39f995c9f1996                                                                                         5 minutes ago       Running             kube-apiserver            0                   ec77663c174f9       kube-apiserver-multinode-240000
	66f15076d3443       3861cfcd7c04c                                                                                         5 minutes ago       Running             etcd                      0                   20ff2ecb3a6db       etcd-multinode-240000
	
	
	==> coredns [29e516c918ef] <==
	[INFO] 10.244.0.3:60151 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0001204s
	[INFO] 10.244.1.2:50831 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001128s
	[INFO] 10.244.1.2:41628 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.0000727s
	[INFO] 10.244.1.2:58750 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000090601s
	[INFO] 10.244.1.2:59003 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0000565s
	[INFO] 10.244.1.2:44988 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.0000534s
	[INFO] 10.244.1.2:46242 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0000553s
	[INFO] 10.244.1.2:54917 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0000638s
	[INFO] 10.244.1.2:39304 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000177201s
	[INFO] 10.244.0.3:48823 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0000796s
	[INFO] 10.244.0.3:44709 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000142901s
	[INFO] 10.244.0.3:48375 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0000774s
	[INFO] 10.244.0.3:58925 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000125101s
	[INFO] 10.244.1.2:59246 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001171s
	[INFO] 10.244.1.2:47730 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0000697s
	[INFO] 10.244.1.2:33031 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0000695s
	[INFO] 10.244.1.2:50853 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000057s
	[INFO] 10.244.0.3:39682 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000390002s
	[INFO] 10.244.0.3:52761 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000108301s
	[INFO] 10.244.0.3:46476 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000158601s
	[INFO] 10.244.0.3:57613 - 5 "PTR IN 1.224.28.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.0000931s
	[INFO] 10.244.1.2:43367 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000233301s
	[INFO] 10.244.1.2:50120 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.0002331s
	[INFO] 10.244.1.2:43779 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.0000821s
	[INFO] 10.244.1.2:37155 - 5 "PTR IN 1.224.28.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.0000589s
	
	
	==> describe nodes <==
	Name:               multinode-240000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-240000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1a940f980f77c9bfc8ef93678db8c1f49c9dd79d
	                    minikube.k8s.io/name=multinode-240000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_28T01_07_32_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 28 Mar 2024 01:07:27 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-240000
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 28 Mar 2024 01:12:37 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 28 Mar 2024 01:12:08 +0000   Thu, 28 Mar 2024 01:07:24 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 28 Mar 2024 01:12:08 +0000   Thu, 28 Mar 2024 01:07:24 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 28 Mar 2024 01:12:08 +0000   Thu, 28 Mar 2024 01:07:24 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 28 Mar 2024 01:12:08 +0000   Thu, 28 Mar 2024 01:07:57 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.28.227.122
	  Hostname:    multinode-240000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 77e4288a9d1a4a8591b02d6f25cea92a
	  System UUID:                074b49af-5c50-b749-b0a9-2a3d75bf97a0
	  Boot ID:                    5e7401b1-76d8-4e1e-9c46-25fb1a0921bd
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.0
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7fdf7869d9-ct428                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         53s
	  kube-system                 coredns-76f75df574-776ph                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m55s
	  kube-system                 etcd-multinode-240000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m8s
	  kube-system                 kindnet-rwghf                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m55s
	  kube-system                 kube-apiserver-multinode-240000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m8s
	  kube-system                 kube-controller-manager-multinode-240000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m8s
	  kube-system                 kube-proxy-47rqg                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m55s
	  kube-system                 kube-scheduler-multinode-240000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m11s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m46s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)    100m (5%)
	  memory             220Mi (10%)   220Mi (10%)
	  ephemeral-storage  0 (0%)        0 (0%)
	  hugepages-2Mi      0 (0%)        0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m53s                  kube-proxy       
	  Normal  Starting                 5m18s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m18s (x8 over 5m18s)  kubelet          Node multinode-240000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m18s (x8 over 5m18s)  kubelet          Node multinode-240000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m18s (x7 over 5m18s)  kubelet          Node multinode-240000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m18s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 5m8s                   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  5m8s                   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  5m8s                   kubelet          Node multinode-240000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m8s                   kubelet          Node multinode-240000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m8s                   kubelet          Node multinode-240000 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           4m56s                  node-controller  Node multinode-240000 event: Registered Node multinode-240000 in Controller
	  Normal  NodeReady                4m42s                  kubelet          Node multinode-240000 status is now: NodeReady
	
	
	Name:               multinode-240000-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-240000-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1a940f980f77c9bfc8ef93678db8c1f49c9dd79d
	                    minikube.k8s.io/name=multinode-240000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_03_28T01_10_55_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 28 Mar 2024 01:10:54 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-240000-m02
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 28 Mar 2024 01:12:36 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 28 Mar 2024 01:11:56 +0000   Thu, 28 Mar 2024 01:10:54 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 28 Mar 2024 01:11:56 +0000   Thu, 28 Mar 2024 01:10:54 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 28 Mar 2024 01:11:56 +0000   Thu, 28 Mar 2024 01:10:54 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 28 Mar 2024 01:11:56 +0000   Thu, 28 Mar 2024 01:11:17 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.28.230.250
	  Hostname:    multinode-240000-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 2bcbb6f523d04ea69ba7f23d0cdfec75
	  System UUID:                d499bd2a-38ff-6a40-b0a5-5534aeedd854
	  Boot ID:                    cfc1ec0e-7646-40c9-8245-9d09d77d6b1d
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.0
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7fdf7869d9-zgwm4    0 (0%)        0 (0%)      0 (0%)           0 (0%)         53s
	  kube-system                 kindnet-hsnfl               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      105s
	  kube-system                 kube-proxy-t88gz            0 (0%)        0 (0%)      0 (0%)           0 (0%)         105s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)   100m (5%)
	  memory             50Mi (2%)   50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 92s                  kube-proxy       
	  Normal  Starting                 105s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  105s (x2 over 105s)  kubelet          Node multinode-240000-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    105s (x2 over 105s)  kubelet          Node multinode-240000-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     105s (x2 over 105s)  kubelet          Node multinode-240000-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  105s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           101s                 node-controller  Node multinode-240000-m02 event: Registered Node multinode-240000-m02 in Controller
	  Normal  NodeReady                82s                  kubelet          Node multinode-240000-m02 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000009] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Mar28 01:06] systemd-fstab-generator[645]: Ignoring "noauto" option for root device
	[  +0.218444] systemd-fstab-generator[657]: Ignoring "noauto" option for root device
	[ +33.049654] systemd-fstab-generator[949]: Ignoring "noauto" option for root device
	[  +0.128381] kauditd_printk_skb: 59 callbacks suppressed
	[  +0.607687] systemd-fstab-generator[988]: Ignoring "noauto" option for root device
	[  +0.242885] systemd-fstab-generator[1000]: Ignoring "noauto" option for root device
	[  +0.282520] systemd-fstab-generator[1014]: Ignoring "noauto" option for root device
	[  +2.859298] systemd-fstab-generator[1187]: Ignoring "noauto" option for root device
	[  +0.222956] systemd-fstab-generator[1199]: Ignoring "noauto" option for root device
	[  +0.221614] systemd-fstab-generator[1211]: Ignoring "noauto" option for root device
	[  +0.309019] systemd-fstab-generator[1226]: Ignoring "noauto" option for root device
	[Mar28 01:07] systemd-fstab-generator[1335]: Ignoring "noauto" option for root device
	[  +0.125080] kauditd_printk_skb: 205 callbacks suppressed
	[  +3.381272] systemd-fstab-generator[1531]: Ignoring "noauto" option for root device
	[  +7.529164] systemd-fstab-generator[1810]: Ignoring "noauto" option for root device
	[  +0.139934] kauditd_printk_skb: 73 callbacks suppressed
	[ +10.383663] systemd-fstab-generator[2851]: Ignoring "noauto" option for root device
	[  +0.162325] kauditd_printk_skb: 62 callbacks suppressed
	[ +13.509491] systemd-fstab-generator[4429]: Ignoring "noauto" option for root device
	[  +0.243419] kauditd_printk_skb: 12 callbacks suppressed
	[  +2.858614] hrtimer: interrupt took 2207192 ns
	[  +4.408757] kauditd_printk_skb: 51 callbacks suppressed
	[Mar28 01:11] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> etcd [66f15076d344] <==
	{"level":"info","ts":"2024-03-28T01:08:13.466988Z","caller":"traceutil/trace.go:171","msg":"trace[421799255] transaction","detail":"{read_only:false; response_revision:472; number_of_response:1; }","duration":"134.584525ms","start":"2024-03-28T01:08:13.332354Z","end":"2024-03-28T01:08:13.466938Z","steps":["trace[421799255] 'process raft request'  (duration: 134.090116ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-28T01:10:59.511717Z","caller":"traceutil/trace.go:171","msg":"trace[1142585964] linearizableReadLoop","detail":"{readStateIndex:687; appliedIndex:686; }","duration":"436.315965ms","start":"2024-03-28T01:10:59.07538Z","end":"2024-03-28T01:10:59.511696Z","steps":["trace[1142585964] 'read index received'  (duration: 436.030563ms)","trace[1142585964] 'applied index is now lower than readState.Index'  (duration: 284.702µs)"],"step_count":2}
	{"level":"info","ts":"2024-03-28T01:10:59.512275Z","caller":"traceutil/trace.go:171","msg":"trace[1348574610] transaction","detail":"{read_only:false; response_revision:634; number_of_response:1; }","duration":"449.31293ms","start":"2024-03-28T01:10:59.06295Z","end":"2024-03-28T01:10:59.512263Z","steps":["trace[1348574610] 'process raft request'  (duration: 448.519826ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-28T01:10:59.512853Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-28T01:10:59.062928Z","time spent":"449.45053ms","remote":"127.0.0.1:38378","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":2840,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/minions/multinode-240000-m02\" mod_revision:629 > success:<request_put:<key:\"/registry/minions/multinode-240000-m02\" value_size:2794 >> failure:<request_range:<key:\"/registry/minions/multinode-240000-m02\" > >"}
	{"level":"warn","ts":"2024-03-28T01:10:59.513197Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"300.190089ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"warn","ts":"2024-03-28T01:10:59.513717Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"438.408175ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1116"}
	{"level":"info","ts":"2024-03-28T01:10:59.513866Z","caller":"traceutil/trace.go:171","msg":"trace[81468098] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:634; }","duration":"438.609976ms","start":"2024-03-28T01:10:59.075243Z","end":"2024-03-28T01:10:59.513853Z","steps":["trace[81468098] 'agreement among raft nodes before linearized reading'  (duration: 438.411075ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-28T01:10:59.514004Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-28T01:10:59.075212Z","time spent":"438.779477ms","remote":"127.0.0.1:38376","response type":"/etcdserverpb.KV/Range","request count":0,"request size":67,"response count":1,"response size":1140,"request content":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" "}
	{"level":"warn","ts":"2024-03-28T01:10:59.514572Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"114.593269ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-240000-m02\" ","response":"range_response_count:1 size:2855"}
	{"level":"info","ts":"2024-03-28T01:10:59.513364Z","caller":"traceutil/trace.go:171","msg":"trace[1662463706] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:634; }","duration":"300.39069ms","start":"2024-03-28T01:10:59.212956Z","end":"2024-03-28T01:10:59.513346Z","steps":["trace[1662463706] 'agreement among raft nodes before linearized reading'  (duration: 300.187689ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-28T01:10:59.514793Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-28T01:10:59.212939Z","time spent":"301.841997ms","remote":"127.0.0.1:38250","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":29,"request content":"key:\"/registry/health\" "}
	{"level":"info","ts":"2024-03-28T01:10:59.514682Z","caller":"traceutil/trace.go:171","msg":"trace[436798599] range","detail":"{range_begin:/registry/minions/multinode-240000-m02; range_end:; response_count:1; response_revision:634; }","duration":"114.72947ms","start":"2024-03-28T01:10:59.399942Z","end":"2024-03-28T01:10:59.514671Z","steps":["trace[436798599] 'agreement among raft nodes before linearized reading'  (duration: 114.599669ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-28T01:10:59.915757Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"246.339522ms","expected-duration":"100ms","prefix":"","request":"header:<ID:5931397400809275209 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:632 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1028 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-03-28T01:10:59.916071Z","caller":"traceutil/trace.go:171","msg":"trace[1709706881] linearizableReadLoop","detail":"{readStateIndex:688; appliedIndex:687; }","duration":"375.217462ms","start":"2024-03-28T01:10:59.54084Z","end":"2024-03-28T01:10:59.916057Z","steps":["trace[1709706881] 'read index received'  (duration: 128.391337ms)","trace[1709706881] 'applied index is now lower than readState.Index'  (duration: 246.824925ms)"],"step_count":2}
	{"level":"info","ts":"2024-03-28T01:10:59.916423Z","caller":"traceutil/trace.go:171","msg":"trace[989720432] transaction","detail":"{read_only:false; response_revision:635; number_of_response:1; }","duration":"393.609753ms","start":"2024-03-28T01:10:59.522803Z","end":"2024-03-28T01:10:59.916413Z","steps":["trace[989720432] 'process raft request'  (duration: 146.398526ms)","trace[989720432] 'compare'  (duration: 246.268522ms)"],"step_count":2}
	{"level":"warn","ts":"2024-03-28T01:10:59.916691Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-28T01:10:59.522786Z","time spent":"393.875354ms","remote":"127.0.0.1:38376","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1101,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:632 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1028 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"warn","ts":"2024-03-28T01:10:59.917262Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"376.433568ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/replicasets/\" range_end:\"/registry/replicasets0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-03-28T01:10:59.917435Z","caller":"traceutil/trace.go:171","msg":"trace[1161548713] range","detail":"{range_begin:/registry/replicasets/; range_end:/registry/replicasets0; response_count:0; response_revision:635; }","duration":"376.637769ms","start":"2024-03-28T01:10:59.540787Z","end":"2024-03-28T01:10:59.917425Z","steps":["trace[1161548713] 'agreement among raft nodes before linearized reading'  (duration: 376.392268ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-28T01:10:59.917691Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-28T01:10:59.540769Z","time spent":"376.91027ms","remote":"127.0.0.1:38694","response type":"/etcdserverpb.KV/Range","request count":0,"request size":50,"response count":1,"response size":31,"request content":"key:\"/registry/replicasets/\" range_end:\"/registry/replicasets0\" count_only:true "}
	{"level":"warn","ts":"2024-03-28T01:10:59.918051Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"367.486723ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/masterleases/172.28.227.122\" ","response":"range_response_count:1 size:135"}
	{"level":"info","ts":"2024-03-28T01:10:59.918129Z","caller":"traceutil/trace.go:171","msg":"trace[1446638395] range","detail":"{range_begin:/registry/masterleases/172.28.227.122; range_end:; response_count:1; response_revision:635; }","duration":"367.615624ms","start":"2024-03-28T01:10:59.550463Z","end":"2024-03-28T01:10:59.918079Z","steps":["trace[1446638395] 'agreement among raft nodes before linearized reading'  (duration: 367.512823ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-28T01:10:59.918216Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-28T01:10:59.550448Z","time spent":"367.759125ms","remote":"127.0.0.1:38266","response type":"/etcdserverpb.KV/Range","request count":0,"request size":39,"response count":1,"response size":159,"request content":"key:\"/registry/masterleases/172.28.227.122\" "}
	{"level":"info","ts":"2024-03-28T01:11:12.167297Z","caller":"traceutil/trace.go:171","msg":"trace[713269765] transaction","detail":"{read_only:false; response_revision:656; number_of_response:1; }","duration":"102.816993ms","start":"2024-03-28T01:11:12.064463Z","end":"2024-03-28T01:11:12.16728Z","steps":["trace[713269765] 'process raft request'  (duration: 102.714092ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-28T01:11:49.318471Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"102.652261ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-03-28T01:11:49.319235Z","caller":"traceutil/trace.go:171","msg":"trace[1864750263] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:727; }","duration":"103.466864ms","start":"2024-03-28T01:11:49.215754Z","end":"2024-03-28T01:11:49.31922Z","steps":["trace[1864750263] 'range keys from in-memory index tree'  (duration: 102.568261ms)"],"step_count":1}
	
	
	==> kernel <==
	 01:12:39 up 7 min,  0 users,  load average: 0.27, 0.33, 0.18
	Linux multinode-240000 5.10.207 #1 SMP Wed Mar 27 22:02:20 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [dc9808261b21] <==
	I0328 01:11:33.249582       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:11:43.264371       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:11:43.264401       1 main.go:227] handling current node
	I0328 01:11:43.264416       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:11:43.264423       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:11:53.279900       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:11:53.279951       1 main.go:227] handling current node
	I0328 01:11:53.279966       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:11:53.279973       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:12:03.293773       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:12:03.294600       1 main.go:227] handling current node
	I0328 01:12:03.294734       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:12:03.294749       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:12:13.306004       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:12:13.306142       1 main.go:227] handling current node
	I0328 01:12:13.306159       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:12:13.306171       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:12:23.323495       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:12:23.323643       1 main.go:227] handling current node
	I0328 01:12:23.323660       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:12:23.323670       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:12:33.333517       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:12:33.333547       1 main.go:227] handling current node
	I0328 01:12:33.333562       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:12:33.333569       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [a01212226d03] <==
	I0328 01:07:27.024509       1 controller.go:624] quota admission added evaluator for: namespaces
	I0328 01:07:27.032129       1 shared_informer.go:318] Caches are synced for configmaps
	I0328 01:07:27.033651       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0328 01:07:27.034653       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	E0328 01:07:27.135384       1 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	I0328 01:07:27.344575       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0328 01:07:27.829964       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0328 01:07:27.839669       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0328 01:07:27.839686       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0328 01:07:29.295786       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0328 01:07:29.389935       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0328 01:07:29.549036       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0328 01:07:29.574617       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.28.227.122]
	I0328 01:07:29.576490       1 controller.go:624] quota admission added evaluator for: endpoints
	I0328 01:07:29.591805       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0328 01:07:29.970638       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0328 01:07:31.308136       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0328 01:07:31.338349       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0328 01:07:31.361232       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0328 01:07:44.013739       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	I0328 01:07:44.261899       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I0328 01:11:00.128366       1 trace.go:236] Trace[1100374427]: "GuaranteedUpdate etcd3" audit-id:,key:/masterleases/172.28.227.122,type:*v1.Endpoints,resource:apiServerIPInfo (28-Mar-2024 01:10:59.549) (total time: 578ms):
	Trace[1100374427]: ---"initial value restored" 369ms (01:10:59.918)
	Trace[1100374427]: ---"Transaction prepared" 199ms (01:11:00.118)
	Trace[1100374427]: [578.646669ms] [578.646669ms] END
	
	
	==> kube-controller-manager [1aa05268773e] <==
	I0328 01:07:45.710380       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="73.035205ms"
	I0328 01:07:45.710568       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="73.7µs"
	I0328 01:07:57.839298       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="81.8µs"
	I0328 01:07:57.891332       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="135.3µs"
	I0328 01:07:58.938669       1 node_lifecycle_controller.go:1045] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I0328 01:07:59.949779       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="25.944009ms"
	I0328 01:07:59.950218       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="327.807µs"
	I0328 01:10:54.764176       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-240000-m02\" does not exist"
	I0328 01:10:54.803820       1 event.go:376] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-hsnfl"
	I0328 01:10:54.803944       1 event.go:376] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-t88gz"
	I0328 01:10:54.804885       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-240000-m02" podCIDRs=["10.244.1.0/24"]
	I0328 01:10:58.975442       1 node_lifecycle_controller.go:874] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-240000-m02"
	I0328 01:10:58.975715       1 event.go:376] "Event occurred" object="multinode-240000-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-240000-m02 event: Registered Node multinode-240000-m02 in Controller"
	I0328 01:11:17.665064       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-240000-m02"
	I0328 01:11:46.242165       1 event.go:376] "Event occurred" object="default/busybox" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set busybox-7fdf7869d9 to 2"
	I0328 01:11:46.265582       1 event.go:376] "Event occurred" object="default/busybox-7fdf7869d9" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-7fdf7869d9-zgwm4"
	I0328 01:11:46.287052       1 event.go:376] "Event occurred" object="default/busybox-7fdf7869d9" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-7fdf7869d9-ct428"
	I0328 01:11:46.306059       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="64.440988ms"
	I0328 01:11:46.352353       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="46.180707ms"
	I0328 01:11:46.354927       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="106.701µs"
	I0328 01:11:46.380446       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="75.4µs"
	I0328 01:11:49.177937       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="20.338671ms"
	I0328 01:11:49.178143       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="95.8µs"
	I0328 01:11:49.352601       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="13.382248ms"
	I0328 01:11:49.353052       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="62.5µs"
	
	
	==> kube-proxy [bb0b3c542264] <==
	I0328 01:07:46.260052       1 server_others.go:72] "Using iptables proxy"
	I0328 01:07:46.279785       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["172.28.227.122"]
	I0328 01:07:46.364307       1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
	I0328 01:07:46.364414       1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0328 01:07:46.364433       1 server_others.go:168] "Using iptables Proxier"
	I0328 01:07:46.368524       1 proxier.go:245] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0328 01:07:46.368854       1 server.go:865] "Version info" version="v1.29.3"
	I0328 01:07:46.368909       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0328 01:07:46.370904       1 config.go:188] "Starting service config controller"
	I0328 01:07:46.382389       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0328 01:07:46.382488       1 shared_informer.go:318] Caches are synced for service config
	I0328 01:07:46.371910       1 config.go:97] "Starting endpoint slice config controller"
	I0328 01:07:46.382665       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0328 01:07:46.382693       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0328 01:07:46.374155       1 config.go:315] "Starting node config controller"
	I0328 01:07:46.382861       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0328 01:07:46.382887       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [7061eab02790] <==
	W0328 01:07:28.076506       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0328 01:07:28.076537       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0328 01:07:28.106836       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0328 01:07:28.107081       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0328 01:07:28.240756       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0328 01:07:28.240834       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0328 01:07:28.255074       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0328 01:07:28.255356       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0328 01:07:28.278207       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0328 01:07:28.278668       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0328 01:07:28.381584       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0328 01:07:28.381627       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0328 01:07:28.514618       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0328 01:07:28.515155       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0328 01:07:28.528993       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0328 01:07:28.529395       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0328 01:07:28.532653       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0328 01:07:28.532704       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0328 01:07:28.584380       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0328 01:07:28.585331       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0328 01:07:28.617611       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0328 01:07:28.618424       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0328 01:07:28.646703       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0328 01:07:28.647128       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0328 01:07:30.316754       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Mar 28 01:08:31 multinode-240000 kubelet[2878]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 28 01:09:31 multinode-240000 kubelet[2878]: E0328 01:09:31.665653    2878 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 28 01:09:31 multinode-240000 kubelet[2878]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 28 01:09:31 multinode-240000 kubelet[2878]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 28 01:09:31 multinode-240000 kubelet[2878]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 28 01:09:31 multinode-240000 kubelet[2878]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 28 01:10:31 multinode-240000 kubelet[2878]: E0328 01:10:31.664206    2878 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 28 01:10:31 multinode-240000 kubelet[2878]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 28 01:10:31 multinode-240000 kubelet[2878]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 28 01:10:31 multinode-240000 kubelet[2878]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 28 01:10:31 multinode-240000 kubelet[2878]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 28 01:11:31 multinode-240000 kubelet[2878]: E0328 01:11:31.665378    2878 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 28 01:11:31 multinode-240000 kubelet[2878]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 28 01:11:31 multinode-240000 kubelet[2878]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 28 01:11:31 multinode-240000 kubelet[2878]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 28 01:11:31 multinode-240000 kubelet[2878]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 28 01:11:46 multinode-240000 kubelet[2878]: I0328 01:11:46.331643    2878 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-776ph" podStartSLOduration=242.331590885 podStartE2EDuration="4m2.331590885s" podCreationTimestamp="2024-03-28 01:07:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-03-28 01:07:59.920958191 +0000 UTC m=+28.670261915" watchObservedRunningTime="2024-03-28 01:11:46.331590885 +0000 UTC m=+255.080894609"
	Mar 28 01:11:46 multinode-240000 kubelet[2878]: I0328 01:11:46.331886    2878 topology_manager.go:215] "Topology Admit Handler" podUID="82be2bd2-ca76-4804-8e23-ebd40a434863" podNamespace="default" podName="busybox-7fdf7869d9-ct428"
	Mar 28 01:11:46 multinode-240000 kubelet[2878]: I0328 01:11:46.369342    2878 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-86msg\" (UniqueName: \"kubernetes.io/projected/82be2bd2-ca76-4804-8e23-ebd40a434863-kube-api-access-86msg\") pod \"busybox-7fdf7869d9-ct428\" (UID: \"82be2bd2-ca76-4804-8e23-ebd40a434863\") " pod="default/busybox-7fdf7869d9-ct428"
	Mar 28 01:11:47 multinode-240000 kubelet[2878]: I0328 01:11:47.083297    2878 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="930fbfde452c0b2b3f13a6751fc648a70e87137f38175cb6dd161b40193b9a79"
	Mar 28 01:12:31 multinode-240000 kubelet[2878]: E0328 01:12:31.665333    2878 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 28 01:12:31 multinode-240000 kubelet[2878]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 28 01:12:31 multinode-240000 kubelet[2878]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 28 01:12:31 multinode-240000 kubelet[2878]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 28 01:12:31 multinode-240000 kubelet[2878]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0328 01:12:31.002559    9304 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-240000 -n multinode-240000
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-240000 -n multinode-240000: (13.0504783s)
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-240000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (60.04s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (407.36s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-windows-amd64.exe node list -p multinode-240000
multinode_test.go:321: (dbg) Run:  out/minikube-windows-amd64.exe stop -p multinode-240000
E0328 01:28:29.039793   10460 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-848700\client.crt: The system cannot find the path specified.
multinode_test.go:321: (dbg) Done: out/minikube-windows-amd64.exe stop -p multinode-240000: (1m43.4463796s)
multinode_test.go:326: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-240000 --wait=true -v=8 --alsologtostderr
E0328 01:33:29.039980   10460 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-848700\client.crt: The system cannot find the path specified.
multinode_test.go:326: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p multinode-240000 --wait=true -v=8 --alsologtostderr: exit status 1 (4m12.0473254s)

                                                
                                                
-- stdout --
	* [multinode-240000] minikube v1.33.0-beta.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4170 Build 19045.4170
	  - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=18485
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the hyperv driver based on existing profile
	* Starting "multinode-240000" primary control-plane node in "multinode-240000" cluster
	* Restarting existing hyperv VM for "multinode-240000" ...
	* Preparing Kubernetes v1.29.3 on Docker 26.0.0 ...
	* Configuring CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	* Enabled addons: 
	
	* Starting "multinode-240000-m02" worker node in "multinode-240000" cluster
	* Restarting existing hyperv VM for "multinode-240000-m02" ...

                                                
                                                
-- /stdout --
** stderr ** 
	W0328 01:30:00.224436    6044 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0328 01:30:00.313275    6044 out.go:291] Setting OutFile to fd 972 ...
	I0328 01:30:00.313275    6044 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 01:30:00.313275    6044 out.go:304] Setting ErrFile to fd 968...
	I0328 01:30:00.313275    6044 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 01:30:00.337998    6044 out.go:298] Setting JSON to false
	I0328 01:30:00.341994    6044 start.go:129] hostinfo: {"hostname":"minikube6","uptime":12061,"bootTime":1711577338,"procs":191,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4170 Build 19045.4170","kernelVersion":"10.0.19045.4170 Build 19045.4170","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0328 01:30:00.342153    6044 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0328 01:30:00.458190    6044 out.go:177] * [multinode-240000] minikube v1.33.0-beta.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4170 Build 19045.4170
	I0328 01:30:00.607515    6044 notify.go:220] Checking for updates...
	I0328 01:30:00.653360    6044 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0328 01:30:00.766456    6044 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0328 01:30:00.956146    6044 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0328 01:30:01.014359    6044 out.go:177]   - MINIKUBE_LOCATION=18485
	I0328 01:30:01.258189    6044 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0328 01:30:01.322877    6044 config.go:182] Loaded profile config "multinode-240000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0328 01:30:01.323187    6044 driver.go:392] Setting default libvirt URI to qemu:///system
	I0328 01:30:07.308307    6044 out.go:177] * Using the hyperv driver based on existing profile
	I0328 01:30:07.316021    6044 start.go:297] selected driver: hyperv
	I0328 01:30:07.316898    6044 start.go:901] validating driver "hyperv" against &{Name:multinode-240000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Ku
bernetesVersion:v1.29.3 ClusterName:multinode-240000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.28.227.122 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.28.230.250 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.28.224.172 Port:0 KubernetesVersion:v1.29.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress
:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docke
r BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0328 01:30:07.316984    6044 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0328 01:30:07.376110    6044 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0328 01:30:07.377440    6044 cni.go:84] Creating CNI manager for ""
	I0328 01:30:07.377440    6044 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0328 01:30:07.377673    6044 start.go:340] cluster config:
	{Name:multinode-240000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:multinode-240000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.28.227.122 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.28.230.250 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.28.224.172 Port:0 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0328 01:30:07.377673    6044 iso.go:125] acquiring lock: {Name:mk879943e10653d47fd8ae811a43a2f6cff06f02 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0328 01:30:07.513634    6044 out.go:177] * Starting "multinode-240000" primary control-plane node in "multinode-240000" cluster
	I0328 01:30:07.670409    6044 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0328 01:30:07.670830    6044 preload.go:147] Found local preload: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4
	I0328 01:30:07.670906    6044 cache.go:56] Caching tarball of preloaded images
	I0328 01:30:07.671334    6044 preload.go:173] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0328 01:30:07.671600    6044 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0328 01:30:07.671600    6044 profile.go:142] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-240000\config.json ...
	I0328 01:30:07.675183    6044 start.go:360] acquireMachinesLock for multinode-240000: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0328 01:30:07.675393    6044 start.go:364] duration metric: took 210.3µs to acquireMachinesLock for "multinode-240000"
	I0328 01:30:07.675608    6044 start.go:96] Skipping create...Using existing machine configuration
	I0328 01:30:07.675708    6044 fix.go:54] fixHost starting: 
	I0328 01:30:07.676667    6044 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-240000 ).state
	I0328 01:30:10.633072    6044 main.go:141] libmachine: [stdout =====>] : Off
	
	I0328 01:30:10.633538    6044 main.go:141] libmachine: [stderr =====>] : 
	I0328 01:30:10.633538    6044 fix.go:112] recreateIfNeeded on multinode-240000: state=Stopped err=<nil>
	W0328 01:30:10.633538    6044 fix.go:138] unexpected machine state, will restart: <nil>
	I0328 01:30:10.637851    6044 out.go:177] * Restarting existing hyperv VM for "multinode-240000" ...
	I0328 01:30:10.641170    6044 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-240000
	I0328 01:30:13.842787    6044 main.go:141] libmachine: [stdout =====>] : 
	I0328 01:30:13.842787    6044 main.go:141] libmachine: [stderr =====>] : 
	I0328 01:30:13.842787    6044 main.go:141] libmachine: Waiting for host to start...
	I0328 01:30:13.843043    6044 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-240000 ).state
	I0328 01:30:16.229995    6044 main.go:141] libmachine: [stdout =====>] : Running
	
	I0328 01:30:16.229995    6044 main.go:141] libmachine: [stderr =====>] : 
	I0328 01:30:16.230332    6044 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-240000 ).networkadapters[0]).ipaddresses[0]
	I0328 01:30:18.893212    6044 main.go:141] libmachine: [stdout =====>] : 
	I0328 01:30:18.893212    6044 main.go:141] libmachine: [stderr =====>] : 
	I0328 01:30:19.908866    6044 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-240000 ).state
	I0328 01:30:22.292946    6044 main.go:141] libmachine: [stdout =====>] : Running
	
	I0328 01:30:22.292946    6044 main.go:141] libmachine: [stderr =====>] : 
	I0328 01:30:22.292946    6044 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-240000 ).networkadapters[0]).ipaddresses[0]
	I0328 01:30:25.082635    6044 main.go:141] libmachine: [stdout =====>] : 
	I0328 01:30:25.083520    6044 main.go:141] libmachine: [stderr =====>] : 
	I0328 01:30:26.084474    6044 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-240000 ).state
	I0328 01:30:28.446937    6044 main.go:141] libmachine: [stdout =====>] : Running
	
	I0328 01:30:28.446937    6044 main.go:141] libmachine: [stderr =====>] : 
	I0328 01:30:28.446937    6044 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-240000 ).networkadapters[0]).ipaddresses[0]
	I0328 01:30:31.181702    6044 main.go:141] libmachine: [stdout =====>] : 
	I0328 01:30:31.181702    6044 main.go:141] libmachine: [stderr =====>] : 
	I0328 01:30:32.189615    6044 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-240000 ).state
	I0328 01:30:34.529122    6044 main.go:141] libmachine: [stdout =====>] : Running
	
	I0328 01:30:34.529525    6044 main.go:141] libmachine: [stderr =====>] : 
	I0328 01:30:34.529525    6044 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-240000 ).networkadapters[0]).ipaddresses[0]
	I0328 01:30:37.218113    6044 main.go:141] libmachine: [stdout =====>] : 
	I0328 01:30:37.218113    6044 main.go:141] libmachine: [stderr =====>] : 
	I0328 01:30:38.223978    6044 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-240000 ).state
	I0328 01:30:40.572558    6044 main.go:141] libmachine: [stdout =====>] : Running
	
	I0328 01:30:40.572558    6044 main.go:141] libmachine: [stderr =====>] : 
	I0328 01:30:40.573122    6044 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-240000 ).networkadapters[0]).ipaddresses[0]
	I0328 01:30:43.307092    6044 main.go:141] libmachine: [stdout =====>] : 172.28.229.19
	
	I0328 01:30:43.307092    6044 main.go:141] libmachine: [stderr =====>] : 
	I0328 01:30:43.309887    6044 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-240000 ).state
	I0328 01:30:45.582154    6044 main.go:141] libmachine: [stdout =====>] : Running
	
	I0328 01:30:45.582154    6044 main.go:141] libmachine: [stderr =====>] : 
	I0328 01:30:45.582154    6044 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-240000 ).networkadapters[0]).ipaddresses[0]
	I0328 01:30:48.299861    6044 main.go:141] libmachine: [stdout =====>] : 172.28.229.19
	
	I0328 01:30:48.299861    6044 main.go:141] libmachine: [stderr =====>] : 
	I0328 01:30:48.300290    6044 profile.go:142] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-240000\config.json ...
	I0328 01:30:48.303469    6044 machine.go:94] provisionDockerMachine start ...
	I0328 01:30:48.303469    6044 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-240000 ).state
	I0328 01:30:50.561613    6044 main.go:141] libmachine: [stdout =====>] : Running
	
	I0328 01:30:50.561613    6044 main.go:141] libmachine: [stderr =====>] : 
	I0328 01:30:50.562693    6044 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-240000 ).networkadapters[0]).ipaddresses[0]
	I0328 01:30:53.317669    6044 main.go:141] libmachine: [stdout =====>] : 172.28.229.19
	
	I0328 01:30:53.317819    6044 main.go:141] libmachine: [stderr =====>] : 
	I0328 01:30:53.324574    6044 main.go:141] libmachine: Using SSH client type: native
	I0328 01:30:53.325237    6044 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x12a9f80] 0x12acb60 <nil>  [] 0s} 172.28.229.19 22 <nil> <nil>}
	I0328 01:30:53.325237    6044 main.go:141] libmachine: About to run SSH command:
	hostname
	I0328 01:30:53.466835    6044 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0328 01:30:53.467015    6044 buildroot.go:166] provisioning hostname "multinode-240000"
	I0328 01:30:53.467099    6044 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-240000 ).state
	I0328 01:30:55.689924    6044 main.go:141] libmachine: [stdout =====>] : Running
	
	I0328 01:30:55.689924    6044 main.go:141] libmachine: [stderr =====>] : 
	I0328 01:30:55.690673    6044 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-240000 ).networkadapters[0]).ipaddresses[0]
	I0328 01:30:58.389933    6044 main.go:141] libmachine: [stdout =====>] : 172.28.229.19
	
	I0328 01:30:58.389933    6044 main.go:141] libmachine: [stderr =====>] : 
	I0328 01:30:58.395412    6044 main.go:141] libmachine: Using SSH client type: native
	I0328 01:30:58.396746    6044 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x12a9f80] 0x12acb60 <nil>  [] 0s} 172.28.229.19 22 <nil> <nil>}
	I0328 01:30:58.396888    6044 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-240000 && echo "multinode-240000" | sudo tee /etc/hostname
	I0328 01:30:58.564031    6044 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-240000
	
	I0328 01:30:58.564031    6044 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-240000 ).state
	I0328 01:31:00.811138    6044 main.go:141] libmachine: [stdout =====>] : Running
	
	I0328 01:31:00.811368    6044 main.go:141] libmachine: [stderr =====>] : 
	I0328 01:31:00.811452    6044 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-240000 ).networkadapters[0]).ipaddresses[0]
	I0328 01:31:03.509452    6044 main.go:141] libmachine: [stdout =====>] : 172.28.229.19
	
	I0328 01:31:03.509531    6044 main.go:141] libmachine: [stderr =====>] : 
	I0328 01:31:03.515796    6044 main.go:141] libmachine: Using SSH client type: native
	I0328 01:31:03.516104    6044 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x12a9f80] 0x12acb60 <nil>  [] 0s} 172.28.229.19 22 <nil> <nil>}
	I0328 01:31:03.516104    6044 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-240000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-240000/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-240000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0328 01:31:03.670779    6044 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0328 01:31:03.670779    6044 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0328 01:31:03.670779    6044 buildroot.go:174] setting up certificates
	I0328 01:31:03.670779    6044 provision.go:84] configureAuth start
	I0328 01:31:03.670779    6044 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-240000 ).state
	I0328 01:31:05.907361    6044 main.go:141] libmachine: [stdout =====>] : Running
	
	I0328 01:31:05.907361    6044 main.go:141] libmachine: [stderr =====>] : 
	I0328 01:31:05.908344    6044 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-240000 ).networkadapters[0]).ipaddresses[0]
	I0328 01:31:08.669793    6044 main.go:141] libmachine: [stdout =====>] : 172.28.229.19
	
	I0328 01:31:08.669793    6044 main.go:141] libmachine: [stderr =====>] : 
	I0328 01:31:08.670703    6044 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-240000 ).state
	I0328 01:31:10.883725    6044 main.go:141] libmachine: [stdout =====>] : Running
	
	I0328 01:31:10.884309    6044 main.go:141] libmachine: [stderr =====>] : 
	I0328 01:31:10.884497    6044 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-240000 ).networkadapters[0]).ipaddresses[0]
	I0328 01:31:13.604385    6044 main.go:141] libmachine: [stdout =====>] : 172.28.229.19
	
	I0328 01:31:13.605031    6044 main.go:141] libmachine: [stderr =====>] : 
	I0328 01:31:13.605211    6044 provision.go:143] copyHostCerts
	I0328 01:31:13.605288    6044 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0328 01:31:13.605288    6044 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0328 01:31:13.605288    6044 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0328 01:31:13.606136    6044 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0328 01:31:13.606902    6044 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0328 01:31:13.607696    6044 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0328 01:31:13.607696    6044 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0328 01:31:13.607696    6044 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1675 bytes)
	I0328 01:31:13.609005    6044 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0328 01:31:13.609241    6044 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0328 01:31:13.609241    6044 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0328 01:31:13.609590    6044 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0328 01:31:13.610710    6044 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-240000 san=[127.0.0.1 172.28.229.19 localhost minikube multinode-240000]
	I0328 01:31:13.916678    6044 provision.go:177] copyRemoteCerts
	I0328 01:31:13.931112    6044 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0328 01:31:13.931295    6044 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-240000 ).state
	I0328 01:31:16.173641    6044 main.go:141] libmachine: [stdout =====>] : Running
	
	I0328 01:31:16.173641    6044 main.go:141] libmachine: [stderr =====>] : 
	I0328 01:31:16.173935    6044 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-240000 ).networkadapters[0]).ipaddresses[0]
	I0328 01:31:18.890759    6044 main.go:141] libmachine: [stdout =====>] : 172.28.229.19
	
	I0328 01:31:18.891588    6044 main.go:141] libmachine: [stderr =====>] : 
	I0328 01:31:18.891995    6044 sshutil.go:53] new ssh client: &{IP:172.28.229.19 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-240000\id_rsa Username:docker}
	I0328 01:31:18.998828    6044 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (5.0676811s)
	I0328 01:31:18.998828    6044 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0328 01:31:18.998828    6044 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1216 bytes)
	I0328 01:31:19.049980    6044 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0328 01:31:19.049980    6044 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0328 01:31:19.100749    6044 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0328 01:31:19.101170    6044 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0328 01:31:19.152754    6044 provision.go:87] duration metric: took 15.4818698s to configureAuth
	I0328 01:31:19.152957    6044 buildroot.go:189] setting minikube options for container-runtime
	I0328 01:31:19.153486    6044 config.go:182] Loaded profile config "multinode-240000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0328 01:31:19.153657    6044 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-240000 ).state
	I0328 01:31:21.481248    6044 main.go:141] libmachine: [stdout =====>] : Running
	
	I0328 01:31:21.481248    6044 main.go:141] libmachine: [stderr =====>] : 
	I0328 01:31:21.481399    6044 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-240000 ).networkadapters[0]).ipaddresses[0]
	I0328 01:31:24.249457    6044 main.go:141] libmachine: [stdout =====>] : 172.28.229.19
	
	I0328 01:31:24.249457    6044 main.go:141] libmachine: [stderr =====>] : 
	I0328 01:31:24.256249    6044 main.go:141] libmachine: Using SSH client type: native
	I0328 01:31:24.257214    6044 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x12a9f80] 0x12acb60 <nil>  [] 0s} 172.28.229.19 22 <nil> <nil>}
	I0328 01:31:24.257214    6044 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0328 01:31:24.387228    6044 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0328 01:31:24.387228    6044 buildroot.go:70] root file system type: tmpfs
	I0328 01:31:24.387518    6044 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0328 01:31:24.387602    6044 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-240000 ).state
	I0328 01:31:26.668994    6044 main.go:141] libmachine: [stdout =====>] : Running
	
	I0328 01:31:26.669143    6044 main.go:141] libmachine: [stderr =====>] : 
	I0328 01:31:26.669143    6044 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-240000 ).networkadapters[0]).ipaddresses[0]
	I0328 01:31:29.382845    6044 main.go:141] libmachine: [stdout =====>] : 172.28.229.19
	
	I0328 01:31:29.382845    6044 main.go:141] libmachine: [stderr =====>] : 
	I0328 01:31:29.390386    6044 main.go:141] libmachine: Using SSH client type: native
	I0328 01:31:29.390557    6044 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x12a9f80] 0x12acb60 <nil>  [] 0s} 172.28.229.19 22 <nil> <nil>}
	I0328 01:31:29.390557    6044 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0328 01:31:29.549421    6044 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0328 01:31:29.550025    6044 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-240000 ).state
	I0328 01:31:31.809789    6044 main.go:141] libmachine: [stdout =====>] : Running
	
	I0328 01:31:31.810462    6044 main.go:141] libmachine: [stderr =====>] : 
	I0328 01:31:31.810462    6044 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-240000 ).networkadapters[0]).ipaddresses[0]
	I0328 01:31:34.516698    6044 main.go:141] libmachine: [stdout =====>] : 172.28.229.19
	
	I0328 01:31:34.517804    6044 main.go:141] libmachine: [stderr =====>] : 
	I0328 01:31:34.523304    6044 main.go:141] libmachine: Using SSH client type: native
	I0328 01:31:34.524045    6044 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x12a9f80] 0x12acb60 <nil>  [] 0s} 172.28.229.19 22 <nil> <nil>}
	I0328 01:31:34.524045    6044 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0328 01:31:37.114381    6044 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0328 01:31:37.114381    6044 machine.go:97] duration metric: took 48.8105807s to provisionDockerMachine
	I0328 01:31:37.114381    6044 start.go:293] postStartSetup for "multinode-240000" (driver="hyperv")
	I0328 01:31:37.114381    6044 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0328 01:31:37.128277    6044 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0328 01:31:37.128277    6044 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-240000 ).state
	I0328 01:31:39.380911    6044 main.go:141] libmachine: [stdout =====>] : Running
	
	I0328 01:31:39.381266    6044 main.go:141] libmachine: [stderr =====>] : 
	I0328 01:31:39.381709    6044 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-240000 ).networkadapters[0]).ipaddresses[0]
	I0328 01:31:42.076488    6044 main.go:141] libmachine: [stdout =====>] : 172.28.229.19
	
	I0328 01:31:42.076488    6044 main.go:141] libmachine: [stderr =====>] : 
	I0328 01:31:42.077192    6044 sshutil.go:53] new ssh client: &{IP:172.28.229.19 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-240000\id_rsa Username:docker}
	I0328 01:31:42.179970    6044 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (5.0516588s)
	I0328 01:31:42.194768    6044 ssh_runner.go:195] Run: cat /etc/os-release
	I0328 01:31:42.201744    6044 command_runner.go:130] > NAME=Buildroot
	I0328 01:31:42.201744    6044 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0328 01:31:42.201744    6044 command_runner.go:130] > ID=buildroot
	I0328 01:31:42.201744    6044 command_runner.go:130] > VERSION_ID=2023.02.9
	I0328 01:31:42.201744    6044 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0328 01:31:42.201848    6044 info.go:137] Remote host: Buildroot 2023.02.9
	I0328 01:31:42.201959    6044 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0328 01:31:42.202609    6044 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0328 01:31:42.204213    6044 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\104602.pem -> 104602.pem in /etc/ssl/certs
	I0328 01:31:42.204213    6044 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\104602.pem -> /etc/ssl/certs/104602.pem
	I0328 01:31:42.218315    6044 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0328 01:31:42.238227    6044 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\104602.pem --> /etc/ssl/certs/104602.pem (1708 bytes)
	I0328 01:31:42.286689    6044 start.go:296] duration metric: took 5.1722726s for postStartSetup
	I0328 01:31:42.286829    6044 fix.go:56] duration metric: took 1m34.6105776s for fixHost
	I0328 01:31:42.286921    6044 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-240000 ).state
	I0328 01:31:44.532150    6044 main.go:141] libmachine: [stdout =====>] : Running
	
	I0328 01:31:44.532150    6044 main.go:141] libmachine: [stderr =====>] : 
	I0328 01:31:44.532926    6044 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-240000 ).networkadapters[0]).ipaddresses[0]
	I0328 01:31:47.278447    6044 main.go:141] libmachine: [stdout =====>] : 172.28.229.19
	
	I0328 01:31:47.279303    6044 main.go:141] libmachine: [stderr =====>] : 
	I0328 01:31:47.284914    6044 main.go:141] libmachine: Using SSH client type: native
	I0328 01:31:47.285607    6044 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x12a9f80] 0x12acb60 <nil>  [] 0s} 172.28.229.19 22 <nil> <nil>}
	I0328 01:31:47.285607    6044 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0328 01:31:47.426555    6044 main.go:141] libmachine: SSH cmd err, output: <nil>: 1711589507.440502788
	
	I0328 01:31:47.426555    6044 fix.go:216] guest clock: 1711589507.440502788
	I0328 01:31:47.426555    6044 fix.go:229] Guest: 2024-03-28 01:31:47.440502788 +0000 UTC Remote: 2024-03-28 01:31:42.2868296 +0000 UTC m=+102.161341801 (delta=5.153673188s)
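The `fix.go` lines above read `date +%s.%N` on the guest, compare it with the host's recorded wall time, and log the drift before resetting the guest clock. A minimal sketch of that comparison, using the exact timestamps from this run (function names are illustrative, not minikube's actual `fix.go` API):

```python
def clock_delta(guest_epoch: float, host_epoch: float) -> float:
    """Return guest-minus-host clock drift in seconds."""
    return guest_epoch - host_epoch

def reset_command(guest_epoch: float) -> str:
    """The log shows the clock being rewound to the whole-second guest time."""
    return f"sudo date -s @{int(guest_epoch)}"

# Values taken directly from the log lines above.
guest = 1711589507.440502788   # guest `date +%s.%N`
host = 1711589502.2868296      # host "Remote:" timestamp as a Unix epoch
delta = clock_delta(guest, host)   # logged as delta=5.153673188s
```

In this run the drift was about 5.15 s, which is why the next SSH command in the log is `sudo date -s @1711589507`.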
	I0328 01:31:47.426555    6044 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-240000 ).state
	I0328 01:31:49.682881    6044 main.go:141] libmachine: [stdout =====>] : Running
	
	I0328 01:31:49.682881    6044 main.go:141] libmachine: [stderr =====>] : 
	I0328 01:31:49.683884    6044 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-240000 ).networkadapters[0]).ipaddresses[0]
	I0328 01:31:52.425647    6044 main.go:141] libmachine: [stdout =====>] : 172.28.229.19
	
	I0328 01:31:52.425719    6044 main.go:141] libmachine: [stderr =====>] : 
	I0328 01:31:52.431477    6044 main.go:141] libmachine: Using SSH client type: native
	I0328 01:31:52.432491    6044 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x12a9f80] 0x12acb60 <nil>  [] 0s} 172.28.229.19 22 <nil> <nil>}
	I0328 01:31:52.432491    6044 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1711589507
	I0328 01:31:52.585055    6044 main.go:141] libmachine: SSH cmd err, output: <nil>: Thu Mar 28 01:31:47 UTC 2024
	
	I0328 01:31:52.585119    6044 fix.go:236] clock set: Thu Mar 28 01:31:47 UTC 2024
	 (err=<nil>)
	I0328 01:31:52.585119    6044 start.go:83] releasing machines lock for "multinode-240000", held for 1m44.9089567s
	I0328 01:31:52.585343    6044 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-240000 ).state
	I0328 01:31:54.877318    6044 main.go:141] libmachine: [stdout =====>] : Running
	
	I0328 01:31:54.877318    6044 main.go:141] libmachine: [stderr =====>] : 
	I0328 01:31:54.878144    6044 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-240000 ).networkadapters[0]).ipaddresses[0]
	I0328 01:31:57.574828    6044 main.go:141] libmachine: [stdout =====>] : 172.28.229.19
	
	I0328 01:31:57.575213    6044 main.go:141] libmachine: [stderr =====>] : 
	I0328 01:31:57.579532    6044 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0328 01:31:57.579740    6044 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-240000 ).state
	I0328 01:31:57.592077    6044 ssh_runner.go:195] Run: cat /version.json
	I0328 01:31:57.592077    6044 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-240000 ).state
	I0328 01:31:59.893152    6044 main.go:141] libmachine: [stdout =====>] : Running
	
	I0328 01:31:59.893152    6044 main.go:141] libmachine: [stderr =====>] : 
	I0328 01:31:59.893152    6044 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-240000 ).networkadapters[0]).ipaddresses[0]
	I0328 01:31:59.924231    6044 main.go:141] libmachine: [stdout =====>] : Running
	
	I0328 01:31:59.924231    6044 main.go:141] libmachine: [stderr =====>] : 
	I0328 01:31:59.924231    6044 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-240000 ).networkadapters[0]).ipaddresses[0]
	I0328 01:32:02.721963    6044 main.go:141] libmachine: [stdout =====>] : 172.28.229.19
	
	I0328 01:32:02.722061    6044 main.go:141] libmachine: [stderr =====>] : 
	I0328 01:32:02.722061    6044 sshutil.go:53] new ssh client: &{IP:172.28.229.19 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-240000\id_rsa Username:docker}
	I0328 01:32:02.752414    6044 main.go:141] libmachine: [stdout =====>] : 172.28.229.19
	
	I0328 01:32:02.752484    6044 main.go:141] libmachine: [stderr =====>] : 
	I0328 01:32:02.752832    6044 sshutil.go:53] new ssh client: &{IP:172.28.229.19 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-240000\id_rsa Username:docker}
	I0328 01:32:02.999378    6044 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0328 01:32:02.999378    6044 command_runner.go:130] > {"iso_version": "v1.33.0-1711559712-18485", "kicbase_version": "v0.0.43-beta.0", "minikube_version": "v1.33.0-beta.0", "commit": "db97f5257476488cfa11a4cd2d95d2aa6fbd9d33"}
	I0328 01:32:02.999378    6044 ssh_runner.go:235] Completed: cat /version.json: (5.4072639s)
	I0328 01:32:02.999378    6044 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.4196768s)
	I0328 01:32:03.014095    6044 ssh_runner.go:195] Run: systemctl --version
	I0328 01:32:03.024552    6044 command_runner.go:130] > systemd 252 (252)
	I0328 01:32:03.024629    6044 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0328 01:32:03.038984    6044 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0328 01:32:03.048495    6044 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0328 01:32:03.048812    6044 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0328 01:32:03.061124    6044 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0328 01:32:03.095375    6044 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0328 01:32:03.095375    6044 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
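The `find ... -exec mv` command above selects bridge/podman CNI configs that are not already suffixed `.mk_disabled` and renames them. The same predicate can be sketched in Python (helper name is illustrative; minikube does this entirely with `find` over SSH):

```python
import fnmatch

def cni_configs_to_disable(names):
    """Mirror the find predicate: (*bridge* or *podman*) and not *.mk_disabled.
    Returns (old, new) rename pairs, as the -exec mv step would perform."""
    selected = []
    for n in names:
        if (fnmatch.fnmatch(n, "*bridge*") or fnmatch.fnmatch(n, "*podman*")) \
                and not fnmatch.fnmatch(n, "*.mk_disabled"):
            selected.append((n, n + ".mk_disabled"))
    return selected
```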
	I0328 01:32:03.095375    6044 start.go:494] detecting cgroup driver to use...
	I0328 01:32:03.095375    6044 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0328 01:32:03.135848    6044 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0328 01:32:03.149781    6044 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0328 01:32:03.186891    6044 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0328 01:32:03.209913    6044 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0328 01:32:03.222677    6044 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0328 01:32:03.256516    6044 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0328 01:32:03.290819    6044 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0328 01:32:03.324261    6044 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0328 01:32:03.358770    6044 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0328 01:32:03.396649    6044 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0328 01:32:03.429320    6044 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0328 01:32:03.464518    6044 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
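The run of `sed` commands above rewrites `/etc/containerd/config.toml` in place to pin the pause image, force the `cgroupfs` driver, and migrate to the `runc.v2` runtime. The core rewrites can be sketched as regex substitutions over an illustrative snippet (the snippet is not the VM's actual file):

```python
import re

def rewrite_containerd_config(cfg: str) -> str:
    """Apply the same three rewrites as the logged sed commands."""
    cfg = re.sub(r'(?m)^( *)sandbox_image = .*$',
                 r'\1sandbox_image = "registry.k8s.io/pause:3.9"', cfg)
    cfg = re.sub(r'(?m)^( *)SystemdCgroup = .*$',
                 r'\1SystemdCgroup = false', cfg)
    cfg = cfg.replace('"io.containerd.runtime.v1.linux"',
                      '"io.containerd.runc.v2"')
    return cfg

sample = '''\
[plugins."io.containerd.grpc.v1.cri"]
  sandbox_image = "registry.k8s.io/pause:3.8"
  runtime_type = "io.containerd.runtime.v1.linux"
  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
    SystemdCgroup = true
'''
```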
	I0328 01:32:03.500988    6044 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0328 01:32:03.521856    6044 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0328 01:32:03.535123    6044 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0328 01:32:03.567280    6044 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0328 01:32:03.780537    6044 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0328 01:32:03.818293    6044 start.go:494] detecting cgroup driver to use...
	I0328 01:32:03.831473    6044 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0328 01:32:03.853864    6044 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0328 01:32:03.854614    6044 command_runner.go:130] > [Unit]
	I0328 01:32:03.854614    6044 command_runner.go:130] > Description=Docker Application Container Engine
	I0328 01:32:03.854614    6044 command_runner.go:130] > Documentation=https://docs.docker.com
	I0328 01:32:03.854614    6044 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0328 01:32:03.854614    6044 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0328 01:32:03.854614    6044 command_runner.go:130] > StartLimitBurst=3
	I0328 01:32:03.854614    6044 command_runner.go:130] > StartLimitIntervalSec=60
	I0328 01:32:03.854614    6044 command_runner.go:130] > [Service]
	I0328 01:32:03.854614    6044 command_runner.go:130] > Type=notify
	I0328 01:32:03.854614    6044 command_runner.go:130] > Restart=on-failure
	I0328 01:32:03.854614    6044 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0328 01:32:03.855705    6044 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0328 01:32:03.855747    6044 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0328 01:32:03.855844    6044 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0328 01:32:03.855844    6044 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0328 01:32:03.856011    6044 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0328 01:32:03.856069    6044 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0328 01:32:03.856069    6044 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0328 01:32:03.856069    6044 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0328 01:32:03.856125    6044 command_runner.go:130] > ExecStart=
	I0328 01:32:03.856125    6044 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0328 01:32:03.856171    6044 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0328 01:32:03.856171    6044 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0328 01:32:03.856171    6044 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0328 01:32:03.856171    6044 command_runner.go:130] > LimitNOFILE=infinity
	I0328 01:32:03.856171    6044 command_runner.go:130] > LimitNPROC=infinity
	I0328 01:32:03.856171    6044 command_runner.go:130] > LimitCORE=infinity
	I0328 01:32:03.856171    6044 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0328 01:32:03.856254    6044 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0328 01:32:03.856297    6044 command_runner.go:130] > TasksMax=infinity
	I0328 01:32:03.856297    6044 command_runner.go:130] > TimeoutStartSec=0
	I0328 01:32:03.856297    6044 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0328 01:32:03.856297    6044 command_runner.go:130] > Delegate=yes
	I0328 01:32:03.856297    6044 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0328 01:32:03.856297    6044 command_runner.go:130] > KillMode=process
	I0328 01:32:03.856297    6044 command_runner.go:130] > [Install]
	I0328 01:32:03.856359    6044 command_runner.go:130] > WantedBy=multi-user.target
	I0328 01:32:03.869208    6044 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0328 01:32:03.911638    6044 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0328 01:32:03.958364    6044 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0328 01:32:03.998450    6044 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0328 01:32:04.037925    6044 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0328 01:32:04.102633    6044 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0328 01:32:04.127879    6044 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0328 01:32:04.162952    6044 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0328 01:32:04.176493    6044 ssh_runner.go:195] Run: which cri-dockerd
	I0328 01:32:04.182665    6044 command_runner.go:130] > /usr/bin/cri-dockerd
	I0328 01:32:04.195266    6044 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0328 01:32:04.214250    6044 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0328 01:32:04.259955    6044 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0328 01:32:04.477140    6044 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0328 01:32:04.675026    6044 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0328 01:32:04.675299    6044 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0328 01:32:04.724853    6044 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0328 01:32:04.935415    6044 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0328 01:32:07.626086    6044 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.6906528s)
	I0328 01:32:07.640068    6044 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0328 01:32:07.679186    6044 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0328 01:32:07.717414    6044 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0328 01:32:07.926863    6044 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0328 01:32:08.138067    6044 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0328 01:32:08.356866    6044 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0328 01:32:08.400987    6044 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0328 01:32:08.441537    6044 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0328 01:32:08.668166    6044 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0328 01:32:08.776719    6044 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0328 01:32:08.787947    6044 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0328 01:32:08.796951    6044 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0328 01:32:08.796951    6044 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0328 01:32:08.796951    6044 command_runner.go:130] > Device: 0,22	Inode: 850         Links: 1
	I0328 01:32:08.796951    6044 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0328 01:32:08.796951    6044 command_runner.go:130] > Access: 2024-03-28 01:32:08.707789032 +0000
	I0328 01:32:08.796951    6044 command_runner.go:130] > Modify: 2024-03-28 01:32:08.707789032 +0000
	I0328 01:32:08.796951    6044 command_runner.go:130] > Change: 2024-03-28 01:32:08.712789044 +0000
	I0328 01:32:08.796951    6044 command_runner.go:130] >  Birth: -
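The "Will wait 60s for socket path" step above polls `stat /var/run/cri-dockerd.sock` over SSH until the socket appears or the deadline passes. A simplified poll loop under those assumptions (the real implementation sleeps between stats; this sketch only counts attempts so it is testable without sleeping):

```python
def wait_for_path(exists, timeout_s=60.0, poll_s=0.5):
    """Poll exists() until it returns True or the simulated budget runs out.
    `exists` stands in for the remote `stat` call shown in the log."""
    t = 0.0
    while t <= timeout_s:
        if exists():
            return True
        t += poll_s   # real code would sleep poll_s here
    return False
```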
	I0328 01:32:08.797625    6044 start.go:562] Will wait 60s for crictl version
	I0328 01:32:08.809376    6044 ssh_runner.go:195] Run: which crictl
	I0328 01:32:08.814383    6044 command_runner.go:130] > /usr/bin/crictl
	I0328 01:32:08.827985    6044 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0328 01:32:08.907335    6044 command_runner.go:130] > Version:  0.1.0
	I0328 01:32:08.907335    6044 command_runner.go:130] > RuntimeName:  docker
	I0328 01:32:08.907335    6044 command_runner.go:130] > RuntimeVersion:  26.0.0
	I0328 01:32:08.907335    6044 command_runner.go:130] > RuntimeApiVersion:  v1
	I0328 01:32:08.907335    6044 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.0
	RuntimeApiVersion:  v1
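The `crictl version` block above is simple `Key:  value` output; parsing it into a map (as the `start.go:578` line reflects) can be sketched as:

```python
def parse_crictl_version(output: str) -> dict:
    """Parse `crictl version` key/value output into a dict.
    Helper name is illustrative, not minikube's actual parser."""
    info = {}
    for line in output.splitlines():
        if ":" in line:
            key, _, value = line.partition(":")
            info[key.strip()] = value.strip()
    return info

sample = """Version:  0.1.0
RuntimeName:  docker
RuntimeVersion:  26.0.0
RuntimeApiVersion:  v1"""
```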
	I0328 01:32:08.916322    6044 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0328 01:32:08.948368    6044 command_runner.go:130] > 26.0.0
	I0328 01:32:08.960332    6044 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0328 01:32:08.995021    6044 command_runner.go:130] > 26.0.0
	I0328 01:32:09.002324    6044 out.go:204] * Preparing Kubernetes v1.29.3 on Docker 26.0.0 ...
	I0328 01:32:09.002324    6044 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0328 01:32:09.006798    6044 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0328 01:32:09.006798    6044 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0328 01:32:09.006798    6044 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0328 01:32:09.006798    6044 ip.go:207] Found interface: {Index:6 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:26:7a:39 Flags:up|broadcast|multicast|running}
	I0328 01:32:09.009358    6044 ip.go:210] interface addr: fe80::e3e0:8483:9c84:940f/64
	I0328 01:32:09.009358    6044 ip.go:210] interface addr: 172.28.224.1/20
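The `ip.go` lines above scan the host's interfaces for one whose name matches the prefix `vEthernet (Default Switch)`, skipping non-matches, then choose its IPv4 address over the `fe80::` link-local one. A sketch of that selection logic (pure name/address matching; the real code inspects the OS interface list):

```python
def find_interface(names, prefix):
    """Return the first interface name starting with `prefix`, like
    getIPForInterface's prefix scan in the log above."""
    for name in names:
        if name.startswith(prefix):
            return name
    return None

def pick_ipv4(addrs):
    """Prefer the IPv4 CIDR over any IPv6 (e.g. fe80:: link-local) address."""
    for a in addrs:
        if ":" not in a.split("/")[0]:
            return a
    return None
```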
	I0328 01:32:09.021885    6044 ssh_runner.go:195] Run: grep 172.28.224.1	host.minikube.internal$ /etc/hosts
	I0328 01:32:09.028375    6044 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.28.224.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
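The `grep -v ... ; echo ... > /tmp/h.$$; sudo cp` pipeline above replaces any existing `host.minikube.internal` entry in `/etc/hosts` with a fresh one pointing at the host-side switch address. The same transformation in pure Python, for illustration:

```python
def update_hosts(hosts_text: str, ip: str,
                 name: str = "host.minikube.internal") -> str:
    """Drop any line ending in "\t<name>" (the grep -v step), then append
    a fresh "<ip>\t<name>" entry (the echo step)."""
    kept = [l for l in hosts_text.splitlines()
            if not l.endswith("\t" + name)]
    kept.append(f"{ip}\t{name}")
    return "\n".join(kept) + "\n"
```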
	I0328 01:32:09.052344    6044 kubeadm.go:877] updating cluster {Name:multinode-240000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.29.3 ClusterName:multinode-240000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.28.229.19 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.28.230.250 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.28.224.172 Port:0 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingre
ss-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirr
or: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0328 01:32:09.052710    6044 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0328 01:32:09.062677    6044 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0328 01:32:09.088599    6044 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.29.3
	I0328 01:32:09.088599    6044 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.29.3
	I0328 01:32:09.088801    6044 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.29.3
	I0328 01:32:09.088801    6044 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.29.3
	I0328 01:32:09.088801    6044 command_runner.go:130] > registry.k8s.io/etcd:3.5.12-0
	I0328 01:32:09.088801    6044 command_runner.go:130] > kindest/kindnetd:v20240202-8f1494ea
	I0328 01:32:09.088801    6044 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0328 01:32:09.088801    6044 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0328 01:32:09.088801    6044 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0328 01:32:09.088891    6044 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0328 01:32:09.089966    6044 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.29.3
	registry.k8s.io/kube-controller-manager:v1.29.3
	registry.k8s.io/kube-scheduler:v1.29.3
	registry.k8s.io/kube-proxy:v1.29.3
	registry.k8s.io/etcd:3.5.12-0
	kindest/kindnetd:v20240202-8f1494ea
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0328 01:32:09.089966    6044 docker.go:615] Images already preloaded, skipping extraction
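The "Images already preloaded, skipping extraction" decision above boils down to checking that every image required for the target Kubernetes version is already in the `docker images` output. A sketch of that check, using image names from the log (helper name is illustrative; minikube makes this decision in its docker/cache_images code):

```python
def preload_satisfied(got, required):
    """Extraction of the preload tarball can be skipped when every
    required image is already present in the runtime."""
    return set(required).issubset(set(got))

# Subset of the preloaded images listed in the log above.
preloaded = [
    "registry.k8s.io/kube-apiserver:v1.29.3",
    "registry.k8s.io/kube-controller-manager:v1.29.3",
    "registry.k8s.io/kube-scheduler:v1.29.3",
    "registry.k8s.io/kube-proxy:v1.29.3",
    "registry.k8s.io/etcd:3.5.12-0",
    "registry.k8s.io/coredns/coredns:v1.11.1",
    "registry.k8s.io/pause:3.9",
]
```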
	I0328 01:32:09.101153    6044 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0328 01:32:09.127910    6044 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.29.3
	I0328 01:32:09.127910    6044 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.29.3
	I0328 01:32:09.127910    6044 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.29.3
	I0328 01:32:09.128129    6044 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.29.3
	I0328 01:32:09.128129    6044 command_runner.go:130] > registry.k8s.io/etcd:3.5.12-0
	I0328 01:32:09.128129    6044 command_runner.go:130] > kindest/kindnetd:v20240202-8f1494ea
	I0328 01:32:09.128129    6044 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0328 01:32:09.128129    6044 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0328 01:32:09.128129    6044 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0328 01:32:09.128129    6044 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0328 01:32:09.128295    6044 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.29.3
	registry.k8s.io/kube-controller-manager:v1.29.3
	registry.k8s.io/kube-scheduler:v1.29.3
	registry.k8s.io/kube-proxy:v1.29.3
	registry.k8s.io/etcd:3.5.12-0
	kindest/kindnetd:v20240202-8f1494ea
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0328 01:32:09.128378    6044 cache_images.go:84] Images are preloaded, skipping loading
	I0328 01:32:09.128404    6044 kubeadm.go:928] updating node { 172.28.229.19 8443 v1.29.3 docker true true} ...
	I0328 01:32:09.128470    6044 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-240000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.28.229.19
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:multinode-240000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
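The kubelet systemd drop-in above clears the inherited `ExecStart=` and re-issues it with per-node flags (`--hostname-override`, `--node-ip`). Assembling that command line from the cluster config can be sketched as (helper name is illustrative; the flag set is taken verbatim from the logged unit):

```python
def kubelet_exec_start(version: str, node_name: str, node_ip: str) -> str:
    """Build the kubelet ExecStart line shown in the kubeadm.go:940 log entry."""
    return (f"/var/lib/minikube/binaries/{version}/kubelet"
            " --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf"
            " --config=/var/lib/kubelet/config.yaml"
            f" --hostname-override={node_name}"
            " --kubeconfig=/etc/kubernetes/kubelet.conf"
            f" --node-ip={node_ip}")
```

Note the empty `ExecStart=` directive that precedes it in the unit: without that reset, systemd would treat the base and drop-in commands as duplicate `ExecStart=` settings and refuse to start the service.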
	I0328 01:32:09.138576    6044 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0328 01:32:09.177529    6044 command_runner.go:130] > cgroupfs
	I0328 01:32:09.177776    6044 cni.go:84] Creating CNI manager for ""
	I0328 01:32:09.177776    6044 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0328 01:32:09.177776    6044 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0328 01:32:09.177858    6044 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.28.229.19 APIServerPort:8443 KubernetesVersion:v1.29.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-240000 NodeName:multinode-240000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.28.229.19"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.28.229.19 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/
etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0328 01:32:09.177912    6044 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.28.229.19
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-240000"
	  kubeletExtraArgs:
	    node-ip: 172.28.229.19
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.28.229.19"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
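	The kubeadm config dumped above is a single multi-document YAML stream: an InitConfiguration, a ClusterConfiguration, a KubeletConfiguration, and a KubeProxyConfiguration separated by `---`. A minimal string-based sketch of splitting such a stream and listing the `kind:` of each document (a real consumer would use a YAML parser; the stream below is a trimmed stand-in for the one in the log):

```python
# Split a multi-document YAML stream on "---" separators and report the
# kind: declared by each document. String-based sketch only; kubeadm and
# minikube use real YAML parsing.
config = """apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
"""

def kinds(stream: str) -> list[str]:
    docs = [d for d in stream.split("---\n") if d.strip()]
    return [line.split(":", 1)[1].strip()
            for d in docs
            for line in d.splitlines()
            if line.startswith("kind:")]

print(kinds(config))
# → ['InitConfiguration', 'ClusterConfiguration', 'KubeletConfiguration', 'KubeProxyConfiguration']
```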
	
	I0328 01:32:09.190631    6044 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0328 01:32:09.211817    6044 command_runner.go:130] > kubeadm
	I0328 01:32:09.211817    6044 command_runner.go:130] > kubectl
	I0328 01:32:09.211817    6044 command_runner.go:130] > kubelet
	I0328 01:32:09.211895    6044 binaries.go:44] Found k8s binaries, skipping transfer
	I0328 01:32:09.224707    6044 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0328 01:32:09.244507    6044 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0328 01:32:09.276515    6044 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0328 01:32:09.310052    6044 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
	I0328 01:32:09.359381    6044 ssh_runner.go:195] Run: grep 172.28.229.19	control-plane.minikube.internal$ /etc/hosts
	I0328 01:32:09.365947    6044 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.28.229.19	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0328 01:32:09.400512    6044 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0328 01:32:09.613176    6044 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0328 01:32:09.645629    6044 certs.go:68] Setting up C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-240000 for IP: 172.28.229.19
	I0328 01:32:09.645701    6044 certs.go:194] generating shared ca certs ...
	I0328 01:32:09.645763    6044 certs.go:226] acquiring lock for ca certs: {Name:mkc71405905d3cea24da832e98113e061e759324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 01:32:09.646236    6044 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key
	I0328 01:32:09.646952    6044 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key
	I0328 01:32:09.647228    6044 certs.go:256] generating profile certs ...
	I0328 01:32:09.648024    6044 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-240000\client.key
	I0328 01:32:09.648225    6044 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-240000\apiserver.key.fbd45dfa
	I0328 01:32:09.648381    6044 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-240000\apiserver.crt.fbd45dfa with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.28.229.19]
	I0328 01:32:09.881762    6044 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-240000\apiserver.crt.fbd45dfa ...
	I0328 01:32:09.881762    6044 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-240000\apiserver.crt.fbd45dfa: {Name:mk672bbda5084fd4479fd4bd1f8ff61e22b38a39 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 01:32:09.882343    6044 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-240000\apiserver.key.fbd45dfa ...
	I0328 01:32:09.883365    6044 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-240000\apiserver.key.fbd45dfa: {Name:mk17e009729aae4c06ec0571ea6c00ff1f08753a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 01:32:09.883605    6044 certs.go:381] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-240000\apiserver.crt.fbd45dfa -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-240000\apiserver.crt
	I0328 01:32:09.895434    6044 certs.go:385] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-240000\apiserver.key.fbd45dfa -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-240000\apiserver.key
	I0328 01:32:09.896420    6044 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-240000\proxy-client.key
	I0328 01:32:09.896420    6044 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0328 01:32:09.897470    6044 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0328 01:32:09.897495    6044 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0328 01:32:09.897804    6044 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0328 01:32:09.898064    6044 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-240000\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0328 01:32:09.898287    6044 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-240000\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0328 01:32:09.898447    6044 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-240000\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0328 01:32:09.898579    6044 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-240000\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0328 01:32:09.898785    6044 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\10460.pem (1338 bytes)
	W0328 01:32:09.898785    6044 certs.go:480] ignoring C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\10460_empty.pem, impossibly tiny 0 bytes
	I0328 01:32:09.898785    6044 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0328 01:32:09.898785    6044 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0328 01:32:09.898785    6044 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0328 01:32:09.900091    6044 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0328 01:32:09.900801    6044 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\104602.pem (1708 bytes)
	I0328 01:32:09.901047    6044 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\104602.pem -> /usr/share/ca-certificates/104602.pem
	I0328 01:32:09.901316    6044 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0328 01:32:09.901530    6044 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\10460.pem -> /usr/share/ca-certificates/10460.pem
	I0328 01:32:09.903022    6044 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0328 01:32:09.955883    6044 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0328 01:32:10.011738    6044 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0328 01:32:10.067517    6044 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0328 01:32:10.128505    6044 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-240000\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0328 01:32:10.176844    6044 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-240000\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0328 01:32:10.229773    6044 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-240000\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0328 01:32:10.285499    6044 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-240000\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0328 01:32:10.342232    6044 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\104602.pem --> /usr/share/ca-certificates/104602.pem (1708 bytes)
	I0328 01:32:10.394173    6044 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0328 01:32:10.448053    6044 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\10460.pem --> /usr/share/ca-certificates/10460.pem (1338 bytes)
	I0328 01:32:10.496984    6044 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0328 01:32:10.546538    6044 ssh_runner.go:195] Run: openssl version
	I0328 01:32:10.559981    6044 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0328 01:32:10.574581    6044 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10460.pem && ln -fs /usr/share/ca-certificates/10460.pem /etc/ssl/certs/10460.pem"
	I0328 01:32:10.608039    6044 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10460.pem
	I0328 01:32:10.615597    6044 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Mar 27 23:40 /usr/share/ca-certificates/10460.pem
	I0328 01:32:10.615654    6044 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 27 23:40 /usr/share/ca-certificates/10460.pem
	I0328 01:32:10.628386    6044 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10460.pem
	I0328 01:32:10.637673    6044 command_runner.go:130] > 51391683
	I0328 01:32:10.649972    6044 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10460.pem /etc/ssl/certs/51391683.0"
	I0328 01:32:10.682992    6044 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/104602.pem && ln -fs /usr/share/ca-certificates/104602.pem /etc/ssl/certs/104602.pem"
	I0328 01:32:10.717560    6044 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/104602.pem
	I0328 01:32:10.725835    6044 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Mar 27 23:40 /usr/share/ca-certificates/104602.pem
	I0328 01:32:10.725835    6044 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 27 23:40 /usr/share/ca-certificates/104602.pem
	I0328 01:32:10.739278    6044 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/104602.pem
	I0328 01:32:10.748756    6044 command_runner.go:130] > 3ec20f2e
	I0328 01:32:10.761511    6044 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/104602.pem /etc/ssl/certs/3ec20f2e.0"
	I0328 01:32:10.794098    6044 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0328 01:32:10.829212    6044 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0328 01:32:10.837233    6044 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Mar 27 23:37 /usr/share/ca-certificates/minikubeCA.pem
	I0328 01:32:10.838335    6044 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 27 23:37 /usr/share/ca-certificates/minikubeCA.pem
	I0328 01:32:10.850220    6044 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0328 01:32:10.861221    6044 command_runner.go:130] > b5213941
	I0328 01:32:10.873258    6044 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
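	The three `openssl x509 -hash` / `ln -fs` passes above install each CA into `/etc/ssl/certs` under OpenSSL's subject-hash naming scheme, `<8-hex-digit-hash>.<n>`, which is how OpenSSL locates trust anchors by subject. A small sketch of just the naming rule, using the hash values that appear in the log:

```python
# OpenSSL's hashed CA directory layout: each trusted cert is linked as
# "<subject-hash>.<n>", where n (usually 0) disambiguates hash collisions.
def ca_link_name(subject_hash: str, collision: int = 0) -> str:
    return f"/etc/ssl/certs/{subject_hash}.{collision}"

for h in ("51391683", "3ec20f2e", "b5213941"):  # hashes computed in the log
    print(ca_link_name(h))
```

This mirrors the `ln -fs /etc/ssl/certs/10460.pem /etc/ssl/certs/51391683.0` style commands in the log; computing the hash itself requires OpenSSL.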
	I0328 01:32:10.910968    6044 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0328 01:32:10.919865    6044 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0328 01:32:10.919865    6044 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0328 01:32:10.919950    6044 command_runner.go:130] > Device: 8,1	Inode: 4196142     Links: 1
	I0328 01:32:10.919974    6044 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0328 01:32:10.919974    6044 command_runner.go:130] > Access: 2024-03-28 01:07:17.262283006 +0000
	I0328 01:32:10.919974    6044 command_runner.go:130] > Modify: 2024-03-28 01:07:17.262283006 +0000
	I0328 01:32:10.920036    6044 command_runner.go:130] > Change: 2024-03-28 01:07:17.262283006 +0000
	I0328 01:32:10.920036    6044 command_runner.go:130] >  Birth: 2024-03-28 01:07:17.262283006 +0000
	I0328 01:32:10.936931    6044 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0328 01:32:10.949507    6044 command_runner.go:130] > Certificate will not expire
	I0328 01:32:10.965306    6044 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0328 01:32:10.978035    6044 command_runner.go:130] > Certificate will not expire
	I0328 01:32:10.993060    6044 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0328 01:32:11.004113    6044 command_runner.go:130] > Certificate will not expire
	I0328 01:32:11.017702    6044 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0328 01:32:11.028884    6044 command_runner.go:130] > Certificate will not expire
	I0328 01:32:11.043422    6044 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0328 01:32:11.054378    6044 command_runner.go:130] > Certificate will not expire
	I0328 01:32:11.067575    6044 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0328 01:32:11.083623    6044 command_runner.go:130] > Certificate will not expire
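	Each `openssl x509 -checkend 86400` run above asks one question: does this certificate remain valid for at least the next 24 hours? The equivalent check is plain datetime arithmetic on the certificate's notAfter timestamp (hypothetical values below; the real commands read the timestamp out of the cert):

```python
from datetime import datetime, timedelta, timezone

# Mirror `openssl x509 -checkend <seconds>`: report "will not expire"
# only if notAfter lies more than `seconds` in the future.
def will_not_expire(not_after: datetime, seconds: int = 86400) -> bool:
    return not_after - datetime.now(timezone.utc) > timedelta(seconds=seconds)

print(will_not_expire(datetime.now(timezone.utc) + timedelta(days=365)))   # a year out
print(will_not_expire(datetime.now(timezone.utc) + timedelta(hours=1)))    # inside the window
```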
	I0328 01:32:11.084158    6044 kubeadm.go:391] StartCluster: {Name:multinode-240000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:multinode-240000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.28.229.19 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.28.230.250 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.28.224.172 Port:0 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0328 01:32:11.095332    6044 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0328 01:32:11.133216    6044 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0328 01:32:11.155454    6044 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I0328 01:32:11.155510    6044 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I0328 01:32:11.155579    6044 command_runner.go:130] > /var/lib/minikube/etcd:
	I0328 01:32:11.155579    6044 command_runner.go:130] > member
	W0328 01:32:11.155644    6044 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0328 01:32:11.155749    6044 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0328 01:32:11.155792    6044 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0328 01:32:11.169709    6044 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0328 01:32:11.189381    6044 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0328 01:32:11.190796    6044 kubeconfig.go:47] verify endpoint returned: get endpoint: "multinode-240000" does not appear in C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0328 01:32:11.190963    6044 kubeconfig.go:62] C:\Users\jenkins.minikube6\minikube-integration\kubeconfig needs updating (will repair): [kubeconfig missing "multinode-240000" cluster setting kubeconfig missing "multinode-240000" context setting]
	I0328 01:32:11.192114    6044 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\kubeconfig: {Name:mk4f4c590fd703778dedd3b8c3d630c561af8c6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 01:32:11.205920    6044 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0328 01:32:11.207123    6044 kapi.go:59] client config for multinode-240000: &rest.Config{Host:"https://172.28.229.19:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-240000/client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-240000/client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x26ab500), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0328 01:32:11.208671    6044 cert_rotation.go:137] Starting client certificate rotation controller
	I0328 01:32:11.223482    6044 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0328 01:32:11.245634    6044 command_runner.go:130] > --- /var/tmp/minikube/kubeadm.yaml
	I0328 01:32:11.245725    6044 command_runner.go:130] > +++ /var/tmp/minikube/kubeadm.yaml.new
	I0328 01:32:11.245725    6044 command_runner.go:130] > @@ -1,7 +1,7 @@
	I0328 01:32:11.245725    6044 command_runner.go:130] >  apiVersion: kubeadm.k8s.io/v1beta3
	I0328 01:32:11.245725    6044 command_runner.go:130] >  kind: InitConfiguration
	I0328 01:32:11.245725    6044 command_runner.go:130] >  localAPIEndpoint:
	I0328 01:32:11.245725    6044 command_runner.go:130] > -  advertiseAddress: 172.28.227.122
	I0328 01:32:11.245807    6044 command_runner.go:130] > +  advertiseAddress: 172.28.229.19
	I0328 01:32:11.245807    6044 command_runner.go:130] >    bindPort: 8443
	I0328 01:32:11.245851    6044 command_runner.go:130] >  bootstrapTokens:
	I0328 01:32:11.245851    6044 command_runner.go:130] >    - groups:
	I0328 01:32:11.245851    6044 command_runner.go:130] > @@ -14,13 +14,13 @@
	I0328 01:32:11.245851    6044 command_runner.go:130] >    criSocket: unix:///var/run/cri-dockerd.sock
	I0328 01:32:11.245851    6044 command_runner.go:130] >    name: "multinode-240000"
	I0328 01:32:11.245851    6044 command_runner.go:130] >    kubeletExtraArgs:
	I0328 01:32:11.245851    6044 command_runner.go:130] > -    node-ip: 172.28.227.122
	I0328 01:32:11.245851    6044 command_runner.go:130] > +    node-ip: 172.28.229.19
	I0328 01:32:11.245851    6044 command_runner.go:130] >    taints: []
	I0328 01:32:11.245851    6044 command_runner.go:130] >  ---
	I0328 01:32:11.245851    6044 command_runner.go:130] >  apiVersion: kubeadm.k8s.io/v1beta3
	I0328 01:32:11.245851    6044 command_runner.go:130] >  kind: ClusterConfiguration
	I0328 01:32:11.245851    6044 command_runner.go:130] >  apiServer:
	I0328 01:32:11.245851    6044 command_runner.go:130] > -  certSANs: ["127.0.0.1", "localhost", "172.28.227.122"]
	I0328 01:32:11.245851    6044 command_runner.go:130] > +  certSANs: ["127.0.0.1", "localhost", "172.28.229.19"]
	I0328 01:32:11.245851    6044 command_runner.go:130] >    extraArgs:
	I0328 01:32:11.245851    6044 command_runner.go:130] >      enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	I0328 01:32:11.245851    6044 command_runner.go:130] >  controllerManager:
	I0328 01:32:11.245851    6044 kubeadm.go:634] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -1,7 +1,7 @@
	 apiVersion: kubeadm.k8s.io/v1beta3
	 kind: InitConfiguration
	 localAPIEndpoint:
	-  advertiseAddress: 172.28.227.122
	+  advertiseAddress: 172.28.229.19
	   bindPort: 8443
	 bootstrapTokens:
	   - groups:
	@@ -14,13 +14,13 @@
	   criSocket: unix:///var/run/cri-dockerd.sock
	   name: "multinode-240000"
	   kubeletExtraArgs:
	-    node-ip: 172.28.227.122
	+    node-ip: 172.28.229.19
	   taints: []
	 ---
	 apiVersion: kubeadm.k8s.io/v1beta3
	 kind: ClusterConfiguration
	 apiServer:
	-  certSANs: ["127.0.0.1", "localhost", "172.28.227.122"]
	+  certSANs: ["127.0.0.1", "localhost", "172.28.229.19"]
	   extraArgs:
	     enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	 controllerManager:
	
	-- /stdout --
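	The drift check that produced the diff above reduces to a unified diff between the deployed `/var/tmp/minikube/kubeadm.yaml` and the freshly rendered `kubeadm.yaml.new`: any non-empty diff means the node must be reconfigured from the new file. A minimal sketch with `difflib`, using the two advertise addresses from the log:

```python
import difflib

# Trimmed stand-ins for the deployed and freshly rendered kubeadm configs.
old = "localAPIEndpoint:\n  advertiseAddress: 172.28.227.122\n  bindPort: 8443\n"
new = "localAPIEndpoint:\n  advertiseAddress: 172.28.229.19\n  bindPort: 8443\n"

# Non-empty unified-diff output means the config has drifted and the
# cluster will be reconfigured from the .new file.
diff = list(difflib.unified_diff(
    old.splitlines(), new.splitlines(),
    fromfile="/var/tmp/minikube/kubeadm.yaml",
    tofile="/var/tmp/minikube/kubeadm.yaml.new",
    lineterm=""))
drifted = bool(diff)
print(drifted)
# → True
```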
	I0328 01:32:11.245851    6044 kubeadm.go:1154] stopping kube-system containers ...
	I0328 01:32:11.255514    6044 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0328 01:32:11.284915    6044 command_runner.go:130] > 29e516c918ef
	I0328 01:32:11.285010    6044 command_runner.go:130] > d02996b2d57b
	I0328 01:32:11.285010    6044 command_runner.go:130] > 28426f4e9df5
	I0328 01:32:11.285010    6044 command_runner.go:130] > 6b6f67390b07
	I0328 01:32:11.285010    6044 command_runner.go:130] > dc9808261b21
	I0328 01:32:11.285010    6044 command_runner.go:130] > bb0b3c542264
	I0328 01:32:11.285055    6044 command_runner.go:130] > 5d9ed3a20e88
	I0328 01:32:11.285055    6044 command_runner.go:130] > 6ae82cd0a848
	I0328 01:32:11.285055    6044 command_runner.go:130] > 1aa05268773e
	I0328 01:32:11.285055    6044 command_runner.go:130] > 7061eab02790
	I0328 01:32:11.285055    6044 command_runner.go:130] > a01212226d03
	I0328 01:32:11.285055    6044 command_runner.go:130] > 66f15076d344
	I0328 01:32:11.285055    6044 command_runner.go:130] > 763932cfdf0b
	I0328 01:32:11.285102    6044 command_runner.go:130] > 7415d077c6f8
	I0328 01:32:11.285102    6044 command_runner.go:130] > ec77663c174f
	I0328 01:32:11.285102    6044 command_runner.go:130] > 20ff2ecb3a6d
	I0328 01:32:11.285143    6044 docker.go:483] Stopping containers: [29e516c918ef d02996b2d57b 28426f4e9df5 6b6f67390b07 dc9808261b21 bb0b3c542264 5d9ed3a20e88 6ae82cd0a848 1aa05268773e 7061eab02790 a01212226d03 66f15076d344 763932cfdf0b 7415d077c6f8 ec77663c174f 20ff2ecb3a6d]
	I0328 01:32:11.295385    6044 ssh_runner.go:195] Run: docker stop 29e516c918ef d02996b2d57b 28426f4e9df5 6b6f67390b07 dc9808261b21 bb0b3c542264 5d9ed3a20e88 6ae82cd0a848 1aa05268773e 7061eab02790 a01212226d03 66f15076d344 763932cfdf0b 7415d077c6f8 ec77663c174f 20ff2ecb3a6d
	I0328 01:32:11.327545    6044 command_runner.go:130] > 29e516c918ef
	I0328 01:32:11.327545    6044 command_runner.go:130] > d02996b2d57b
	I0328 01:32:11.327545    6044 command_runner.go:130] > 28426f4e9df5
	I0328 01:32:11.327545    6044 command_runner.go:130] > 6b6f67390b07
	I0328 01:32:11.327545    6044 command_runner.go:130] > dc9808261b21
	I0328 01:32:11.327545    6044 command_runner.go:130] > bb0b3c542264
	I0328 01:32:11.327545    6044 command_runner.go:130] > 5d9ed3a20e88
	I0328 01:32:11.327545    6044 command_runner.go:130] > 6ae82cd0a848
	I0328 01:32:11.327545    6044 command_runner.go:130] > 1aa05268773e
	I0328 01:32:11.327545    6044 command_runner.go:130] > 7061eab02790
	I0328 01:32:11.328529    6044 command_runner.go:130] > a01212226d03
	I0328 01:32:11.328529    6044 command_runner.go:130] > 66f15076d344
	I0328 01:32:11.328529    6044 command_runner.go:130] > 763932cfdf0b
	I0328 01:32:11.328529    6044 command_runner.go:130] > 7415d077c6f8
	I0328 01:32:11.328581    6044 command_runner.go:130] > ec77663c174f
	I0328 01:32:11.328581    6044 command_runner.go:130] > 20ff2ecb3a6d
	I0328 01:32:11.342451    6044 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0328 01:32:11.392958    6044 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0328 01:32:11.413368    6044 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0328 01:32:11.413488    6044 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0328 01:32:11.413488    6044 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0328 01:32:11.413488    6044 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0328 01:32:11.413769    6044 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0328 01:32:11.413769    6044 kubeadm.go:156] found existing configuration files:
	
	I0328 01:32:11.426984    6044 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0328 01:32:11.445904    6044 command_runner.go:130] ! grep: /etc/kubernetes/admin.conf: No such file or directory
	I0328 01:32:11.446787    6044 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0328 01:32:11.458764    6044 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0328 01:32:11.493311    6044 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0328 01:32:11.510433    6044 command_runner.go:130] ! grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0328 01:32:11.510905    6044 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0328 01:32:11.524531    6044 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0328 01:32:11.556540    6044 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0328 01:32:11.575511    6044 command_runner.go:130] ! grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0328 01:32:11.575511    6044 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0328 01:32:11.588024    6044 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0328 01:32:11.620473    6044 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0328 01:32:11.639916    6044 command_runner.go:130] ! grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0328 01:32:11.640222    6044 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0328 01:32:11.654943    6044 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0328 01:32:11.690184    6044 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0328 01:32:11.716592    6044 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0328 01:32:12.005834    6044 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0328 01:32:12.005834    6044 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0328 01:32:12.005834    6044 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0328 01:32:12.005834    6044 command_runner.go:130] > [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0328 01:32:12.005834    6044 command_runner.go:130] > [certs] Using existing front-proxy-ca certificate authority
	I0328 01:32:12.005834    6044 command_runner.go:130] > [certs] Using existing front-proxy-client certificate and key on disk
	I0328 01:32:12.005834    6044 command_runner.go:130] > [certs] Using existing etcd/ca certificate authority
	I0328 01:32:12.005834    6044 command_runner.go:130] > [certs] Using existing etcd/server certificate and key on disk
	I0328 01:32:12.005834    6044 command_runner.go:130] > [certs] Using existing etcd/peer certificate and key on disk
	I0328 01:32:12.005834    6044 command_runner.go:130] > [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0328 01:32:12.005834    6044 command_runner.go:130] > [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0328 01:32:12.005834    6044 command_runner.go:130] > [certs] Using the existing "sa" key
	I0328 01:32:12.005834    6044 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0328 01:32:13.096426    6044 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0328 01:32:13.096507    6044 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0328 01:32:13.096507    6044 command_runner.go:130] > [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0328 01:32:13.096507    6044 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0328 01:32:13.096620    6044 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0328 01:32:13.096816    6044 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0328 01:32:13.096873    6044 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.090975s)
	I0328 01:32:13.096924    6044 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0328 01:32:13.429850    6044 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0328 01:32:13.429850    6044 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0328 01:32:13.429850    6044 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0328 01:32:13.429850    6044 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0328 01:32:13.548135    6044 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0328 01:32:13.548135    6044 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0328 01:32:13.548135    6044 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0328 01:32:13.548135    6044 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0328 01:32:13.548135    6044 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0328 01:32:13.671844    6044 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0328 01:32:13.672006    6044 api_server.go:52] waiting for apiserver process to appear ...
	I0328 01:32:13.684817    6044 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:32:14.198663    6044 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:32:14.688002    6044 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:32:15.196828    6044 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:32:15.683176    6044 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:32:15.712815    6044 command_runner.go:130] > 2032
	I0328 01:32:15.712815    6044 api_server.go:72] duration metric: took 2.040873s to wait for apiserver process to appear ...
	I0328 01:32:15.712912    6044 api_server.go:88] waiting for apiserver healthz status ...
	I0328 01:32:15.712969    6044 api_server.go:253] Checking apiserver healthz at https://172.28.229.19:8443/healthz ...
	I0328 01:32:19.325528    6044 api_server.go:279] https://172.28.229.19:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0328 01:32:19.325627    6044 api_server.go:103] status: https://172.28.229.19:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0328 01:32:19.325627    6044 api_server.go:253] Checking apiserver healthz at https://172.28.229.19:8443/healthz ...
	I0328 01:32:19.386465    6044 api_server.go:279] https://172.28.229.19:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0328 01:32:19.386465    6044 api_server.go:103] status: https://172.28.229.19:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0328 01:32:19.719238    6044 api_server.go:253] Checking apiserver healthz at https://172.28.229.19:8443/healthz ...
	I0328 01:32:19.731650    6044 api_server.go:279] https://172.28.229.19:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0328 01:32:19.731650    6044 api_server.go:103] status: https://172.28.229.19:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0328 01:32:20.227123    6044 api_server.go:253] Checking apiserver healthz at https://172.28.229.19:8443/healthz ...
	I0328 01:32:20.235291    6044 api_server.go:279] https://172.28.229.19:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0328 01:32:20.235397    6044 api_server.go:103] status: https://172.28.229.19:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0328 01:32:20.721486    6044 api_server.go:253] Checking apiserver healthz at https://172.28.229.19:8443/healthz ...
	I0328 01:32:20.740353    6044 api_server.go:279] https://172.28.229.19:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0328 01:32:20.740450    6044 api_server.go:103] status: https://172.28.229.19:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0328 01:32:21.216756    6044 api_server.go:253] Checking apiserver healthz at https://172.28.229.19:8443/healthz ...
	I0328 01:32:21.228799    6044 api_server.go:279] https://172.28.229.19:8443/healthz returned 200:
	ok
	I0328 01:32:21.229301    6044 round_trippers.go:463] GET https://172.28.229.19:8443/version
	I0328 01:32:21.229301    6044 round_trippers.go:469] Request Headers:
	I0328 01:32:21.229301    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:32:21.229301    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:32:21.248951    6044 round_trippers.go:574] Response Status: 200 OK in 19 milliseconds
	I0328 01:32:21.248951    6044 round_trippers.go:577] Response Headers:
	I0328 01:32:21.248951    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:32:21.248951    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:32:21.248951    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:32:21.248951    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:32:21.248951    6044 round_trippers.go:580]     Content-Length: 263
	I0328 01:32:21.248951    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:32:21 GMT
	I0328 01:32:21.248951    6044 round_trippers.go:580]     Audit-Id: 72f12dac-ee55-42f0-9a97-040c7c2de65f
	I0328 01:32:21.248951    6044 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "29",
	  "gitVersion": "v1.29.3",
	  "gitCommit": "6813625b7cd706db5bc7388921be03071e1a492d",
	  "gitTreeState": "clean",
	  "buildDate": "2024-03-14T23:58:36Z",
	  "goVersion": "go1.21.8",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0328 01:32:21.248951    6044 api_server.go:141] control plane version: v1.29.3
	I0328 01:32:21.248951    6044 api_server.go:131] duration metric: took 5.5360011s to wait for apiserver health ...
	I0328 01:32:21.248951    6044 cni.go:84] Creating CNI manager for ""
	I0328 01:32:21.248951    6044 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0328 01:32:21.251958    6044 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0328 01:32:21.266957    6044 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0328 01:32:21.275833    6044 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0328 01:32:21.275906    6044 command_runner.go:130] >   Size: 2694104   	Blocks: 5264       IO Block: 4096   regular file
	I0328 01:32:21.275962    6044 command_runner.go:130] > Device: 0,17	Inode: 3497        Links: 1
	I0328 01:32:21.275962    6044 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0328 01:32:21.275998    6044 command_runner.go:130] > Access: 2024-03-28 01:30:40.390507300 +0000
	I0328 01:32:21.276021    6044 command_runner.go:130] > Modify: 2024-03-27 22:52:09.000000000 +0000
	I0328 01:32:21.276021    6044 command_runner.go:130] > Change: 2024-03-28 01:30:30.450000000 +0000
	I0328 01:32:21.276042    6044 command_runner.go:130] >  Birth: -
	I0328 01:32:21.277142    6044 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.29.3/kubectl ...
	I0328 01:32:21.277211    6044 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0328 01:32:21.343342    6044 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0328 01:32:22.787502    6044 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0328 01:32:22.788080    6044 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0328 01:32:22.788080    6044 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0328 01:32:22.788080    6044 command_runner.go:130] > daemonset.apps/kindnet configured
	I0328 01:32:22.788171    6044 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.29.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.4448194s)
	I0328 01:32:22.788283    6044 system_pods.go:43] waiting for kube-system pods to appear ...
	I0328 01:32:22.788283    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/namespaces/kube-system/pods
	I0328 01:32:22.788283    6044 round_trippers.go:469] Request Headers:
	I0328 01:32:22.788283    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:32:22.788283    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:32:22.795882    6044 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0328 01:32:22.795882    6044 round_trippers.go:577] Response Headers:
	I0328 01:32:22.795882    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:32:22.795882    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:32:22.795882    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:32:22 GMT
	I0328 01:32:22.795882    6044 round_trippers.go:580]     Audit-Id: f306f21b-0c65-49da-bcda-4f2fd057ce7d
	I0328 01:32:22.795882    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:32:22.795882    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:32:22.797867    6044 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1942"},"items":[{"metadata":{"name":"coredns-76f75df574-776ph","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"dc1416cc-736d-4eab-b95d-e963572b78e3","resourceVersion":"1866","creationTimestamp":"2024-03-28T01:07:44Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"840f6c47-adf3-4c08-8b73-04e55f98f236","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:07:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"840f6c47-adf3-4c08-8b73-04e55f98f236\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 87144 chars]
	I0328 01:32:22.803872    6044 system_pods.go:59] 12 kube-system pods found
	I0328 01:32:22.803872    6044 system_pods.go:61] "coredns-76f75df574-776ph" [dc1416cc-736d-4eab-b95d-e963572b78e3] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0328 01:32:22.804853    6044 system_pods.go:61] "etcd-multinode-240000" [0a33e012-ebfe-4ac4-bf0b-ffccdd7308de] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0328 01:32:22.804853    6044 system_pods.go:61] "kindnet-hsnfl" [e049fea9-9620-4eb5-9eb0-056c68076331] Running
	I0328 01:32:22.804853    6044 system_pods.go:61] "kindnet-jvgx2" [507e3461-4bd4-46b9-9189-606b3506a742] Running
	I0328 01:32:22.804853    6044 system_pods.go:61] "kindnet-rwghf" [7c75e225-0e90-4916-bf27-a00a036e0955] Running
	I0328 01:32:22.804853    6044 system_pods.go:61] "kube-apiserver-multinode-240000" [8b9b4cf7-40b0-4a3e-96ca-28c934f9789a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0328 01:32:22.804853    6044 system_pods.go:61] "kube-controller-manager-multinode-240000" [4a79ab06-2314-43bb-8e37-45b9aab24e4e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0328 01:32:22.804853    6044 system_pods.go:61] "kube-proxy-47rqg" [22fd5683-834d-47ae-a5b4-1ed980514e1b] Running
	I0328 01:32:22.804853    6044 system_pods.go:61] "kube-proxy-55rch" [a96f841b-3e8f-42c1-be63-03914c0b90e8] Running
	I0328 01:32:22.804853    6044 system_pods.go:61] "kube-proxy-t88gz" [695603ac-ab24-4206-9728-342b2af018e4] Running
	I0328 01:32:22.804853    6044 system_pods.go:61] "kube-scheduler-multinode-240000" [7670489f-fb6c-4b5f-80e8-5fe8de8d7d19] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0328 01:32:22.804853    6044 system_pods.go:61] "storage-provisioner" [3f881c2f-5b9a-4dc5-bc8c-56784b9ff60f] Running
	I0328 01:32:22.804853    6044 system_pods.go:74] duration metric: took 16.5698ms to wait for pod list to return data ...
	I0328 01:32:22.804853    6044 node_conditions.go:102] verifying NodePressure condition ...
	I0328 01:32:22.804853    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes
	I0328 01:32:22.804853    6044 round_trippers.go:469] Request Headers:
	I0328 01:32:22.804853    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:32:22.804853    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:32:22.811154    6044 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0328 01:32:22.811154    6044 round_trippers.go:577] Response Headers:
	I0328 01:32:22.811154    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:32:22.811154    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:32:22.811154    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:32:22.811154    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:32:22.811154    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:32:22 GMT
	I0328 01:32:22.811154    6044 round_trippers.go:580]     Audit-Id: 4dc287fb-d2c0-4dd5-9300-dae5b03bdc7f
	I0328 01:32:22.811154    6044 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1942"},"items":[{"metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"1859","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v [truncated 15651 chars]
	I0328 01:32:22.812753    6044 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0328 01:32:22.812811    6044 node_conditions.go:123] node cpu capacity is 2
	I0328 01:32:22.812872    6044 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0328 01:32:22.812872    6044 node_conditions.go:123] node cpu capacity is 2
	I0328 01:32:22.812872    6044 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0328 01:32:22.812872    6044 node_conditions.go:123] node cpu capacity is 2
	I0328 01:32:22.812872    6044 node_conditions.go:105] duration metric: took 8.0186ms to run NodePressure ...
	I0328 01:32:22.812933    6044 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0328 01:32:23.362349    6044 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0328 01:32:23.362349    6044 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0328 01:32:23.362349    6044 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0328 01:32:23.362349    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/namespaces/kube-system/pods?labelSelector=tier%3Dcontrol-plane
	I0328 01:32:23.362349    6044 round_trippers.go:469] Request Headers:
	I0328 01:32:23.362349    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:32:23.362349    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:32:23.369364    6044 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0328 01:32:23.369364    6044 round_trippers.go:577] Response Headers:
	I0328 01:32:23.370026    6044 round_trippers.go:580]     Audit-Id: 22598923-e104-4294-8af1-8c8c63fb54cf
	I0328 01:32:23.370026    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:32:23.370026    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:32:23.370026    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:32:23.370026    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:32:23.370026    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:32:23 GMT
	I0328 01:32:23.371511    6044 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1946"},"items":[{"metadata":{"name":"etcd-multinode-240000","namespace":"kube-system","uid":"0a33e012-ebfe-4ac4-bf0b-ffccdd7308de","resourceVersion":"1869","creationTimestamp":"2024-03-28T01:32:19Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.28.229.19:2379","kubernetes.io/config.hash":"9f48c65a58defdbb87996760bf93b230","kubernetes.io/config.mirror":"9f48c65a58defdbb87996760bf93b230","kubernetes.io/config.seen":"2024-03-28T01:32:13.690653938Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:32:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotatio
ns":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f: [truncated 30532 chars]
	I0328 01:32:23.372774    6044 kubeadm.go:733] kubelet initialised
	I0328 01:32:23.372774    6044 kubeadm.go:734] duration metric: took 10.4249ms waiting for restarted kubelet to initialise ...
	I0328 01:32:23.372774    6044 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0328 01:32:23.373324    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/namespaces/kube-system/pods
	I0328 01:32:23.373377    6044 round_trippers.go:469] Request Headers:
	I0328 01:32:23.373407    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:32:23.373407    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:32:23.391616    6044 round_trippers.go:574] Response Status: 200 OK in 18 milliseconds
	I0328 01:32:23.392094    6044 round_trippers.go:577] Response Headers:
	I0328 01:32:23.392094    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:32:23.392094    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:32:23 GMT
	I0328 01:32:23.392094    6044 round_trippers.go:580]     Audit-Id: 7659c847-0240-4180-8d5e-34ad99a7e7c6
	I0328 01:32:23.392094    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:32:23.392094    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:32:23.392094    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:32:23.393994    6044 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1946"},"items":[{"metadata":{"name":"coredns-76f75df574-776ph","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"dc1416cc-736d-4eab-b95d-e963572b78e3","resourceVersion":"1866","creationTimestamp":"2024-03-28T01:07:44Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"840f6c47-adf3-4c08-8b73-04e55f98f236","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:07:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"840f6c47-adf3-4c08-8b73-04e55f98f236\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 87144 chars]
	I0328 01:32:23.398342    6044 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-776ph" in "kube-system" namespace to be "Ready" ...
	I0328 01:32:23.399366    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-776ph
	I0328 01:32:23.399366    6044 round_trippers.go:469] Request Headers:
	I0328 01:32:23.399366    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:32:23.399366    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:32:23.403360    6044 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0328 01:32:23.403360    6044 round_trippers.go:577] Response Headers:
	I0328 01:32:23.403360    6044 round_trippers.go:580]     Audit-Id: c126a05d-c80f-4243-8d68-38114f1a4c62
	I0328 01:32:23.403360    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:32:23.403360    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:32:23.403360    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:32:23.403779    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:32:23.403779    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:32:23 GMT
	I0328 01:32:23.403987    6044 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-76f75df574-776ph","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"dc1416cc-736d-4eab-b95d-e963572b78e3","resourceVersion":"1866","creationTimestamp":"2024-03-28T01:07:44Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"840f6c47-adf3-4c08-8b73-04e55f98f236","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:07:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"840f6c47-adf3-4c08-8b73-04e55f98f236\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0328 01:32:23.404623    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:32:23.404623    6044 round_trippers.go:469] Request Headers:
	I0328 01:32:23.404699    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:32:23.404699    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:32:23.410201    6044 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 01:32:23.410201    6044 round_trippers.go:577] Response Headers:
	I0328 01:32:23.410201    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:32:23.410201    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:32:23.410814    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:32:23 GMT
	I0328 01:32:23.410814    6044 round_trippers.go:580]     Audit-Id: 5741f29c-842d-4c45-aa55-c9106415f8e2
	I0328 01:32:23.410814    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:32:23.410814    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:32:23.411080    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"1859","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5372 chars]
	I0328 01:32:23.411692    6044 pod_ready.go:97] node "multinode-240000" hosting pod "coredns-76f75df574-776ph" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-240000" has status "Ready":"False"
	I0328 01:32:23.411817    6044 pod_ready.go:81] duration metric: took 13.4745ms for pod "coredns-76f75df574-776ph" in "kube-system" namespace to be "Ready" ...
	E0328 01:32:23.411817    6044 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-240000" hosting pod "coredns-76f75df574-776ph" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-240000" has status "Ready":"False"
	I0328 01:32:23.411877    6044 pod_ready.go:78] waiting up to 4m0s for pod "etcd-multinode-240000" in "kube-system" namespace to be "Ready" ...
	I0328 01:32:23.412015    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-240000
	I0328 01:32:23.412015    6044 round_trippers.go:469] Request Headers:
	I0328 01:32:23.412015    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:32:23.412015    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:32:23.415428    6044 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0328 01:32:23.415428    6044 round_trippers.go:577] Response Headers:
	I0328 01:32:23.415428    6044 round_trippers.go:580]     Audit-Id: 89857769-163a-4bf5-ba36-3d8d76ff7ca3
	I0328 01:32:23.415428    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:32:23.415428    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:32:23.415428    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:32:23.415428    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:32:23.415428    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:32:23 GMT
	I0328 01:32:23.415428    6044 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-240000","namespace":"kube-system","uid":"0a33e012-ebfe-4ac4-bf0b-ffccdd7308de","resourceVersion":"1869","creationTimestamp":"2024-03-28T01:32:19Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.28.229.19:2379","kubernetes.io/config.hash":"9f48c65a58defdbb87996760bf93b230","kubernetes.io/config.mirror":"9f48c65a58defdbb87996760bf93b230","kubernetes.io/config.seen":"2024-03-28T01:32:13.690653938Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:32:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6384 chars]
	I0328 01:32:23.416406    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:32:23.416406    6044 round_trippers.go:469] Request Headers:
	I0328 01:32:23.416406    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:32:23.416406    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:32:23.419415    6044 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0328 01:32:23.419415    6044 round_trippers.go:577] Response Headers:
	I0328 01:32:23.419415    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:32:23.419415    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:32:23 GMT
	I0328 01:32:23.419415    6044 round_trippers.go:580]     Audit-Id: ab1e3d0f-d824-4fb9-855a-6f89de629d07
	I0328 01:32:23.419415    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:32:23.419415    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:32:23.419812    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:32:23.420084    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"1859","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5372 chars]
	I0328 01:32:23.420574    6044 pod_ready.go:97] node "multinode-240000" hosting pod "etcd-multinode-240000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-240000" has status "Ready":"False"
	I0328 01:32:23.420632    6044 pod_ready.go:81] duration metric: took 8.7546ms for pod "etcd-multinode-240000" in "kube-system" namespace to be "Ready" ...
	E0328 01:32:23.420632    6044 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-240000" hosting pod "etcd-multinode-240000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-240000" has status "Ready":"False"
	I0328 01:32:23.420715    6044 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-multinode-240000" in "kube-system" namespace to be "Ready" ...
	I0328 01:32:23.420826    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-240000
	I0328 01:32:23.420826    6044 round_trippers.go:469] Request Headers:
	I0328 01:32:23.420826    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:32:23.420826    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:32:23.424565    6044 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0328 01:32:23.424565    6044 round_trippers.go:577] Response Headers:
	I0328 01:32:23.424565    6044 round_trippers.go:580]     Audit-Id: f1f4b044-1215-4c44-b46f-deee6a9cf7dc
	I0328 01:32:23.424565    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:32:23.424565    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:32:23.424565    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:32:23.424565    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:32:23.424565    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:32:23 GMT
	I0328 01:32:23.424565    6044 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-240000","namespace":"kube-system","uid":"8b9b4cf7-40b0-4a3e-96ca-28c934f9789a","resourceVersion":"1870","creationTimestamp":"2024-03-28T01:32:19Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.28.229.19:8443","kubernetes.io/config.hash":"ada1864a97137760b3789cc738948aa2","kubernetes.io/config.mirror":"ada1864a97137760b3789cc738948aa2","kubernetes.io/config.seen":"2024-03-28T01:32:13.677615805Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:32:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7939 chars]
	I0328 01:32:23.425708    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:32:23.425708    6044 round_trippers.go:469] Request Headers:
	I0328 01:32:23.425708    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:32:23.425708    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:32:23.429466    6044 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0328 01:32:23.429466    6044 round_trippers.go:577] Response Headers:
	I0328 01:32:23.429466    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:32:23.429466    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:32:23 GMT
	I0328 01:32:23.429466    6044 round_trippers.go:580]     Audit-Id: f20d50ef-3eb7-46d5-8007-8d3851472675
	I0328 01:32:23.429466    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:32:23.429466    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:32:23.429466    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:32:23.430294    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"1859","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5372 chars]
	I0328 01:32:23.430294    6044 pod_ready.go:97] node "multinode-240000" hosting pod "kube-apiserver-multinode-240000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-240000" has status "Ready":"False"
	I0328 01:32:23.430294    6044 pod_ready.go:81] duration metric: took 9.5785ms for pod "kube-apiserver-multinode-240000" in "kube-system" namespace to be "Ready" ...
	E0328 01:32:23.430294    6044 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-240000" hosting pod "kube-apiserver-multinode-240000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-240000" has status "Ready":"False"
	I0328 01:32:23.430294    6044 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-multinode-240000" in "kube-system" namespace to be "Ready" ...
	I0328 01:32:23.430952    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-240000
	I0328 01:32:23.430993    6044 round_trippers.go:469] Request Headers:
	I0328 01:32:23.430993    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:32:23.431029    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:32:23.435608    6044 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 01:32:23.435608    6044 round_trippers.go:577] Response Headers:
	I0328 01:32:23.435608    6044 round_trippers.go:580]     Audit-Id: 220669ea-17ea-4a31-822f-e000b9198762
	I0328 01:32:23.435608    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:32:23.435967    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:32:23.435967    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:32:23.435967    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:32:23.435967    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:32:23 GMT
	I0328 01:32:23.436509    6044 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-240000","namespace":"kube-system","uid":"4a79ab06-2314-43bb-8e37-45b9aab24e4e","resourceVersion":"1867","creationTimestamp":"2024-03-28T01:07:31Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"092744cdc60a216294790b52c372bdaa","kubernetes.io/config.mirror":"092744cdc60a216294790b52c372bdaa","kubernetes.io/config.seen":"2024-03-28T01:07:31.458008757Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:07:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7732 chars]
	I0328 01:32:23.437400    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:32:23.437469    6044 round_trippers.go:469] Request Headers:
	I0328 01:32:23.437469    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:32:23.437469    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:32:23.440611    6044 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0328 01:32:23.440611    6044 round_trippers.go:577] Response Headers:
	I0328 01:32:23.440611    6044 round_trippers.go:580]     Audit-Id: 4e3122da-e4c0-4a49-b78c-b4945d8cd2db
	I0328 01:32:23.440611    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:32:23.440611    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:32:23.440611    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:32:23.440611    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:32:23.440611    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:32:23 GMT
	I0328 01:32:23.441417    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"1859","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5372 chars]
	I0328 01:32:23.442089    6044 pod_ready.go:97] node "multinode-240000" hosting pod "kube-controller-manager-multinode-240000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-240000" has status "Ready":"False"
	I0328 01:32:23.442089    6044 pod_ready.go:81] duration metric: took 11.7947ms for pod "kube-controller-manager-multinode-240000" in "kube-system" namespace to be "Ready" ...
	E0328 01:32:23.442089    6044 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-240000" hosting pod "kube-controller-manager-multinode-240000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-240000" has status "Ready":"False"
	I0328 01:32:23.442089    6044 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-47rqg" in "kube-system" namespace to be "Ready" ...
	I0328 01:32:23.562617    6044 request.go:629] Waited for 120.4169ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.229.19:8443/api/v1/namespaces/kube-system/pods/kube-proxy-47rqg
	I0328 01:32:23.562830    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/namespaces/kube-system/pods/kube-proxy-47rqg
	I0328 01:32:23.562926    6044 round_trippers.go:469] Request Headers:
	I0328 01:32:23.562926    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:32:23.562926    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:32:23.567221    6044 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0328 01:32:23.567221    6044 round_trippers.go:577] Response Headers:
	I0328 01:32:23.567221    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:32:23.567293    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:32:23.567293    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:32:23.567293    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:32:23.567293    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:32:23 GMT
	I0328 01:32:23.567293    6044 round_trippers.go:580]     Audit-Id: 1698ab54-abd5-401e-9b74-d35987316474
	I0328 01:32:23.567513    6044 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-47rqg","generateName":"kube-proxy-","namespace":"kube-system","uid":"22fd5683-834d-47ae-a5b4-1ed980514e1b","resourceVersion":"1926","creationTimestamp":"2024-03-28T01:07:44Z","labels":{"controller-revision-hash":"7659797656","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"386441f6-e376-4593-92ba-fa739207b68d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:07:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"386441f6-e376-4593-92ba-fa739207b68d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6034 chars]
	I0328 01:32:23.766958    6044 request.go:629] Waited for 198.4053ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:32:23.767191    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:32:23.767191    6044 round_trippers.go:469] Request Headers:
	I0328 01:32:23.767191    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:32:23.767191    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:32:23.771975    6044 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 01:32:23.771975    6044 round_trippers.go:577] Response Headers:
	I0328 01:32:23.771975    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:32:23 GMT
	I0328 01:32:23.771975    6044 round_trippers.go:580]     Audit-Id: daef0079-aa85-4f4a-bfa8-973a4cd67867
	I0328 01:32:23.771975    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:32:23.771975    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:32:23.771975    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:32:23.771975    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:32:23.771975    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"1859","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5372 chars]
	I0328 01:32:23.773542    6044 pod_ready.go:97] node "multinode-240000" hosting pod "kube-proxy-47rqg" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-240000" has status "Ready":"False"
	I0328 01:32:23.773606    6044 pod_ready.go:81] duration metric: took 331.5157ms for pod "kube-proxy-47rqg" in "kube-system" namespace to be "Ready" ...
	E0328 01:32:23.773606    6044 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-240000" hosting pod "kube-proxy-47rqg" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-240000" has status "Ready":"False"
	I0328 01:32:23.773606    6044 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-55rch" in "kube-system" namespace to be "Ready" ...
	I0328 01:32:23.974151    6044 request.go:629] Waited for 200.3791ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.229.19:8443/api/v1/namespaces/kube-system/pods/kube-proxy-55rch
	I0328 01:32:23.974431    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/namespaces/kube-system/pods/kube-proxy-55rch
	I0328 01:32:23.974687    6044 round_trippers.go:469] Request Headers:
	I0328 01:32:23.974687    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:32:23.974687    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:32:23.978771    6044 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 01:32:23.978771    6044 round_trippers.go:577] Response Headers:
	I0328 01:32:23.978771    6044 round_trippers.go:580]     Audit-Id: 1b6a3f8f-3d3a-4282-ae29-6a076d976278
	I0328 01:32:23.978771    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:32:23.978771    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:32:23.978771    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:32:23.978771    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:32:23.978771    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:32:23 GMT
	I0328 01:32:23.978771    6044 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-55rch","generateName":"kube-proxy-","namespace":"kube-system","uid":"a96f841b-3e8f-42c1-be63-03914c0b90e8","resourceVersion":"1831","creationTimestamp":"2024-03-28T01:15:58Z","labels":{"controller-revision-hash":"7659797656","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"386441f6-e376-4593-92ba-fa739207b68d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:15:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"386441f6-e376-4593-92ba-fa739207b68d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6067 chars]
	I0328 01:32:24.164582    6044 request.go:629] Waited for 184.5948ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.229.19:8443/api/v1/nodes/multinode-240000-m03
	I0328 01:32:24.164798    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000-m03
	I0328 01:32:24.164798    6044 round_trippers.go:469] Request Headers:
	I0328 01:32:24.164798    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:32:24.164798    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:32:24.169769    6044 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 01:32:24.169769    6044 round_trippers.go:577] Response Headers:
	I0328 01:32:24.169769    6044 round_trippers.go:580]     Audit-Id: 347d5143-d72d-4f28-b657-4a4fea1a4a3a
	I0328 01:32:24.169839    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:32:24.169839    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:32:24.169839    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:32:24.169839    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:32:24.169839    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:32:24 GMT
	I0328 01:32:24.170093    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000-m03","uid":"dbbc38c1-7871-4a48-98eb-4fd00b43bc22","resourceVersion":"1842","creationTimestamp":"2024-03-28T01:27:30Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_28T01_27_31_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:27:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:meta
data":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-mana [truncated 4407 chars]
	I0328 01:32:24.170603    6044 pod_ready.go:97] node "multinode-240000-m03" hosting pod "kube-proxy-55rch" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-240000-m03" has status "Ready":"Unknown"
	I0328 01:32:24.170660    6044 pod_ready.go:81] duration metric: took 397.0507ms for pod "kube-proxy-55rch" in "kube-system" namespace to be "Ready" ...
	E0328 01:32:24.170715    6044 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-240000-m03" hosting pod "kube-proxy-55rch" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-240000-m03" has status "Ready":"Unknown"
	I0328 01:32:24.170715    6044 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-t88gz" in "kube-system" namespace to be "Ready" ...
	I0328 01:32:24.373285    6044 request.go:629] Waited for 202.1221ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.229.19:8443/api/v1/namespaces/kube-system/pods/kube-proxy-t88gz
	I0328 01:32:24.373285    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/namespaces/kube-system/pods/kube-proxy-t88gz
	I0328 01:32:24.373285    6044 round_trippers.go:469] Request Headers:
	I0328 01:32:24.373285    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:32:24.373285    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:32:24.377942    6044 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 01:32:24.378224    6044 round_trippers.go:577] Response Headers:
	I0328 01:32:24.378224    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:32:24 GMT
	I0328 01:32:24.378224    6044 round_trippers.go:580]     Audit-Id: c94e4e5a-1e6d-4fa9-9d80-72b2f2c49cdf
	I0328 01:32:24.378224    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:32:24.378224    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:32:24.378224    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:32:24.378224    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:32:24.378754    6044 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-t88gz","generateName":"kube-proxy-","namespace":"kube-system","uid":"695603ac-ab24-4206-9728-342b2af018e4","resourceVersion":"650","creationTimestamp":"2024-03-28T01:10:54Z","labels":{"controller-revision-hash":"7659797656","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"386441f6-e376-4593-92ba-fa739207b68d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:10:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"386441f6-e376-4593-92ba-fa739207b68d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5841 chars]
	I0328 01:32:24.578424    6044 request.go:629] Waited for 198.6954ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.229.19:8443/api/v1/nodes/multinode-240000-m02
	I0328 01:32:24.578547    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000-m02
	I0328 01:32:24.578547    6044 round_trippers.go:469] Request Headers:
	I0328 01:32:24.578547    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:32:24.578547    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:32:24.582888    6044 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 01:32:24.582888    6044 round_trippers.go:577] Response Headers:
	I0328 01:32:24.582888    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:32:24.582888    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:32:24.583181    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:32:24.583181    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:32:24.583181    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:32:24 GMT
	I0328 01:32:24.583181    6044 round_trippers.go:580]     Audit-Id: f21c33af-496a-4d86-97ab-574e1116bee1
	I0328 01:32:24.585884    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000-m02","uid":"dcbe05d8-e31e-4891-a7f5-f1d6a1993934","resourceVersion":"1676","creationTimestamp":"2024-03-28T01:10:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_28T01_10_55_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:10:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:meta
data":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-mana [truncated 3834 chars]
	I0328 01:32:24.585884    6044 pod_ready.go:92] pod "kube-proxy-t88gz" in "kube-system" namespace has status "Ready":"True"
	I0328 01:32:24.585884    6044 pod_ready.go:81] duration metric: took 415.1663ms for pod "kube-proxy-t88gz" in "kube-system" namespace to be "Ready" ...
	I0328 01:32:24.585884    6044 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-multinode-240000" in "kube-system" namespace to be "Ready" ...
	I0328 01:32:24.765768    6044 request.go:629] Waited for 179.1164ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.229.19:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-240000
	I0328 01:32:24.766039    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-240000
	I0328 01:32:24.766039    6044 round_trippers.go:469] Request Headers:
	I0328 01:32:24.766039    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:32:24.766039    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:32:24.771490    6044 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 01:32:24.771490    6044 round_trippers.go:577] Response Headers:
	I0328 01:32:24.771564    6044 round_trippers.go:580]     Audit-Id: 2c7100fb-9f35-4070-99ce-5b674459ceba
	I0328 01:32:24.771564    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:32:24.771721    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:32:24.771721    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:32:24.771721    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:32:24.771721    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:32:24 GMT
	I0328 01:32:24.771923    6044 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-240000","namespace":"kube-system","uid":"7670489f-fb6c-4b5f-80e8-5fe8de8d7d19","resourceVersion":"1868","creationTimestamp":"2024-03-28T01:07:28Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"f5f9b00a2a0d8b16290abf555def0fb3","kubernetes.io/config.mirror":"f5f9b00a2a0d8b16290abf555def0fb3","kubernetes.io/config.seen":"2024-03-28T01:07:21.513186595Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:07:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{}
,"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 5444 chars]
	I0328 01:32:24.968690    6044 request.go:629] Waited for 195.8642ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:32:24.968690    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:32:24.968690    6044 round_trippers.go:469] Request Headers:
	I0328 01:32:24.968690    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:32:24.968690    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:32:24.973620    6044 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 01:32:24.973620    6044 round_trippers.go:577] Response Headers:
	I0328 01:32:24.973620    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:32:24.973620    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:32:24 GMT
	I0328 01:32:24.973620    6044 round_trippers.go:580]     Audit-Id: 96942bd8-087c-42d8-ba5c-44b9fe634e1d
	I0328 01:32:24.973620    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:32:24.973620    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:32:24.973734    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:32:24.973779    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"1859","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5372 chars]
	I0328 01:32:24.974540    6044 pod_ready.go:97] node "multinode-240000" hosting pod "kube-scheduler-multinode-240000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-240000" has status "Ready":"False"
	I0328 01:32:24.974611    6044 pod_ready.go:81] duration metric: took 388.7245ms for pod "kube-scheduler-multinode-240000" in "kube-system" namespace to be "Ready" ...
	E0328 01:32:24.974611    6044 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-240000" hosting pod "kube-scheduler-multinode-240000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-240000" has status "Ready":"False"
	I0328 01:32:24.974611    6044 pod_ready.go:38] duration metric: took 1.6018265s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0328 01:32:24.974730    6044 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0328 01:32:24.999010    6044 command_runner.go:130] > -16
	I0328 01:32:24.999010    6044 ops.go:34] apiserver oom_adj: -16
	I0328 01:32:24.999010    6044 kubeadm.go:591] duration metric: took 13.8430667s to restartPrimaryControlPlane
	I0328 01:32:24.999010    6044 kubeadm.go:393] duration metric: took 13.9148017s to StartCluster
	I0328 01:32:24.999010    6044 settings.go:142] acquiring lock: {Name:mk6b97e58c5fe8f88c3b8025e136ed13b1b7453d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 01:32:24.999702    6044 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0328 01:32:25.001404    6044 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\kubeconfig: {Name:mk4f4c590fd703778dedd3b8c3d630c561af8c6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 01:32:25.003180    6044 start.go:234] Will wait 6m0s for node &{Name: IP:172.28.229.19 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0328 01:32:25.008497    6044 out.go:177] * Verifying Kubernetes components...
	I0328 01:32:25.003376    6044 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0328 01:32:25.003561    6044 config.go:182] Loaded profile config "multinode-240000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0328 01:32:25.013871    6044 out.go:177] * Enabled addons: 
	I0328 01:32:25.014678    6044 addons.go:505] duration metric: took 11.4981ms for enable addons: enabled=[]
	I0328 01:32:25.024717    6044 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0328 01:32:25.337072    6044 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0328 01:32:25.367810    6044 node_ready.go:35] waiting up to 6m0s for node "multinode-240000" to be "Ready" ...
	I0328 01:32:25.368966    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:32:25.369033    6044 round_trippers.go:469] Request Headers:
	I0328 01:32:25.369056    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:32:25.369056    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:32:25.372656    6044 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0328 01:32:25.372656    6044 round_trippers.go:577] Response Headers:
	I0328 01:32:25.372656    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:32:25.372656    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:32:25.372656    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:32:25 GMT
	I0328 01:32:25.372656    6044 round_trippers.go:580]     Audit-Id: 7990af29-714a-474b-b648-fad0541389d0
	I0328 01:32:25.372656    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:32:25.373441    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:32:25.373760    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"1859","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5372 chars]
	I0328 01:32:25.873148    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:32:25.873205    6044 round_trippers.go:469] Request Headers:
	I0328 01:32:25.873205    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:32:25.873205    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:32:25.877548    6044 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 01:32:25.877548    6044 round_trippers.go:577] Response Headers:
	I0328 01:32:25.878124    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:32:25.878124    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:32:25 GMT
	I0328 01:32:25.878124    6044 round_trippers.go:580]     Audit-Id: 917ae63e-5384-4274-9f85-8beb8604f997
	I0328 01:32:25.878124    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:32:25.878124    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:32:25.878124    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:32:25.878524    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"1859","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5372 chars]
	I0328 01:32:26.376792    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:32:26.376792    6044 round_trippers.go:469] Request Headers:
	I0328 01:32:26.376792    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:32:26.376792    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:32:26.383478    6044 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0328 01:32:26.383621    6044 round_trippers.go:577] Response Headers:
	I0328 01:32:26.383621    6044 round_trippers.go:580]     Audit-Id: 3f4d1b84-8eee-41fd-bb59-51b89354ca3f
	I0328 01:32:26.383621    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:32:26.383621    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:32:26.383621    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:32:26.383621    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:32:26.383621    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:32:26 GMT
	I0328 01:32:26.383799    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"1859","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5372 chars]
	I0328 01:32:26.877594    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:32:26.877594    6044 round_trippers.go:469] Request Headers:
	I0328 01:32:26.877594    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:32:26.877594    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:32:26.884075    6044 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0328 01:32:26.884075    6044 round_trippers.go:577] Response Headers:
	I0328 01:32:26.884451    6044 round_trippers.go:580]     Audit-Id: 7ec51b15-bb24-4b9a-8d31-24c4df0b9d6c
	I0328 01:32:26.884451    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:32:26.884451    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:32:26.884451    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:32:26.884451    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:32:26.884451    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:32:26 GMT
	I0328 01:32:26.884556    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"1859","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5372 chars]
	I0328 01:32:27.379989    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:32:27.380062    6044 round_trippers.go:469] Request Headers:
	I0328 01:32:27.380062    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:32:27.380062    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:32:27.383811    6044 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0328 01:32:27.383834    6044 round_trippers.go:577] Response Headers:
	I0328 01:32:27.383896    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:32:27.384054    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:32:27.384054    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:32:27.384054    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:32:27 GMT
	I0328 01:32:27.384054    6044 round_trippers.go:580]     Audit-Id: 14087351-d4f7-40dd-9294-41ece6e36270
	I0328 01:32:27.384054    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:32:27.384212    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"1859","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5372 chars]
	I0328 01:32:27.384898    6044 node_ready.go:53] node "multinode-240000" has status "Ready":"False"
	I0328 01:32:27.871883    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:32:27.871883    6044 round_trippers.go:469] Request Headers:
	I0328 01:32:27.872030    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:32:27.872030    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:32:27.876137    6044 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 01:32:27.876945    6044 round_trippers.go:577] Response Headers:
	I0328 01:32:27.876945    6044 round_trippers.go:580]     Audit-Id: 60b97c85-27c4-4698-bfc7-f0f6c9d85811
	I0328 01:32:27.876945    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:32:27.876945    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:32:27.876945    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:32:27.876945    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:32:27.876945    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:32:27 GMT
	I0328 01:32:27.877030    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"1859","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5372 chars]
	I0328 01:32:28.375532    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:32:28.375532    6044 round_trippers.go:469] Request Headers:
	I0328 01:32:28.375532    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:32:28.375532    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:32:28.382107    6044 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0328 01:32:28.382107    6044 round_trippers.go:577] Response Headers:
	I0328 01:32:28.382107    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:32:28.382107    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:32:28.382107    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:32:28.382107    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:32:28.382107    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:32:28 GMT
	I0328 01:32:28.382107    6044 round_trippers.go:580]     Audit-Id: 05e0d4d7-c269-47a6-89bb-bffa4d2770a9
	I0328 01:32:28.382107    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"1859","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5372 chars]
	I0328 01:32:28.878151    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:32:28.878151    6044 round_trippers.go:469] Request Headers:
	I0328 01:32:28.878151    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:32:28.878151    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:32:28.881738    6044 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0328 01:32:28.881738    6044 round_trippers.go:577] Response Headers:
	I0328 01:32:28.881738    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:32:28.881738    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:32:28.881738    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:32:28.881738    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:32:28.881738    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:32:28 GMT
	I0328 01:32:28.881738    6044 round_trippers.go:580]     Audit-Id: bba28862-d523-4da5-bbf4-048da4b0ffbe
	I0328 01:32:28.883058    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"1859","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5372 chars]
	I0328 01:32:29.369320    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:32:29.369634    6044 round_trippers.go:469] Request Headers:
	I0328 01:32:29.369634    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:32:29.369812    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:32:29.374327    6044 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 01:32:29.374327    6044 round_trippers.go:577] Response Headers:
	I0328 01:32:29.374327    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:32:29.374327    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:32:29.374327    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:32:29.374327    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:32:29 GMT
	I0328 01:32:29.374327    6044 round_trippers.go:580]     Audit-Id: 678b863f-3167-4e52-806b-39bd3d866bb2
	I0328 01:32:29.374327    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:32:29.375072    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"1859","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5372 chars]
	I0328 01:32:29.875100    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:32:29.875100    6044 round_trippers.go:469] Request Headers:
	I0328 01:32:29.875100    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:32:29.875100    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:32:29.879745    6044 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 01:32:29.879745    6044 round_trippers.go:577] Response Headers:
	I0328 01:32:29.879745    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:32:29 GMT
	I0328 01:32:29.879745    6044 round_trippers.go:580]     Audit-Id: 9c4420ec-1bf5-4771-9b6d-6bbe10c36b2a
	I0328 01:32:29.879951    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:32:29.879951    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:32:29.879951    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:32:29.879951    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:32:29.880203    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"1859","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5372 chars]
	I0328 01:32:29.881160    6044 node_ready.go:53] node "multinode-240000" has status "Ready":"False"
	I0328 01:32:30.376975    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:32:30.376975    6044 round_trippers.go:469] Request Headers:
	I0328 01:32:30.376975    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:32:30.376975    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:32:30.381566    6044 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 01:32:30.381566    6044 round_trippers.go:577] Response Headers:
	I0328 01:32:30.381672    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:32:30 GMT
	I0328 01:32:30.381672    6044 round_trippers.go:580]     Audit-Id: 5727d147-60f3-4b20-b046-9b4e66307512
	I0328 01:32:30.381672    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:32:30.381672    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:32:30.381672    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:32:30.381672    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:32:30.381919    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"1859","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5372 chars]
	I0328 01:32:30.877103    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:32:30.877103    6044 round_trippers.go:469] Request Headers:
	I0328 01:32:30.877103    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:32:30.877103    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:32:30.885079    6044 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0328 01:32:30.885079    6044 round_trippers.go:577] Response Headers:
	I0328 01:32:30.885079    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:32:30.885079    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:32:30 GMT
	I0328 01:32:30.885079    6044 round_trippers.go:580]     Audit-Id: b4cca81d-9013-4ac9-becd-44ff47d880e1
	I0328 01:32:30.885079    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:32:30.885079    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:32:30.885079    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:32:30.885079    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"1859","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5372 chars]
	I0328 01:32:31.377228    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:32:31.377285    6044 round_trippers.go:469] Request Headers:
	I0328 01:32:31.377285    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:32:31.377285    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:32:31.380759    6044 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0328 01:32:31.380759    6044 round_trippers.go:577] Response Headers:
	I0328 01:32:31.380759    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:32:31.380759    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:32:31.380759    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:32:31.380759    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:32:31.380759    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:32:31 GMT
	I0328 01:32:31.380759    6044 round_trippers.go:580]     Audit-Id: 09ed7261-20fc-40dd-b579-56864756df7c
	I0328 01:32:31.380759    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"1859","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5372 chars]
	I0328 01:32:31.882024    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:32:31.882024    6044 round_trippers.go:469] Request Headers:
	I0328 01:32:31.882024    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:32:31.882111    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:32:31.887412    6044 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 01:32:31.887479    6044 round_trippers.go:577] Response Headers:
	I0328 01:32:31.887662    6044 round_trippers.go:580]     Audit-Id: 7b8b9299-426a-45a6-8a23-0169ad3abc29
	I0328 01:32:31.887662    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:32:31.887662    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:32:31.887662    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:32:31.887662    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:32:31.887662    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:32:31 GMT
	I0328 01:32:31.887662    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"1859","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5372 chars]
	I0328 01:32:31.888339    6044 node_ready.go:53] node "multinode-240000" has status "Ready":"False"
	I0328 01:32:32.369274    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:32:32.369274    6044 round_trippers.go:469] Request Headers:
	I0328 01:32:32.369537    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:32:32.369537    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:32:32.374876    6044 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 01:32:32.374876    6044 round_trippers.go:577] Response Headers:
	I0328 01:32:32.374941    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:32:32.374941    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:32:32.374941    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:32:32.374941    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:32:32.374941    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:32:32 GMT
	I0328 01:32:32.374941    6044 round_trippers.go:580]     Audit-Id: d4a7a6df-b3df-4c3c-ba61-ef7aef928792
	I0328 01:32:32.375218    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"1859","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5372 chars]
	I0328 01:32:32.876487    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:32:32.876487    6044 round_trippers.go:469] Request Headers:
	I0328 01:32:32.876487    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:32:32.876487    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:32:32.880072    6044 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0328 01:32:32.880072    6044 round_trippers.go:577] Response Headers:
	I0328 01:32:32.880072    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:32:32.880072    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:32:32.880072    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:32:32.880072    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:32:32 GMT
	I0328 01:32:32.880072    6044 round_trippers.go:580]     Audit-Id: 774a2c59-4fd8-45d8-bdb6-7a187b7991b4
	I0328 01:32:32.880072    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:32:32.880072    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"1977","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5588 chars]
	I0328 01:32:33.380886    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:32:33.380886    6044 round_trippers.go:469] Request Headers:
	I0328 01:32:33.380886    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:32:33.380886    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:32:33.385366    6044 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 01:32:33.385366    6044 round_trippers.go:577] Response Headers:
	I0328 01:32:33.385366    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:32:33.385825    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:32:33.385825    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:32:33.385825    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:32:33 GMT
	I0328 01:32:33.385825    6044 round_trippers.go:580]     Audit-Id: aea51daa-93a9-429f-bcac-cea2d1e746fe
	I0328 01:32:33.385825    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:32:33.386091    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"1977","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5588 chars]
	I0328 01:32:33.868031    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:32:33.868031    6044 round_trippers.go:469] Request Headers:
	I0328 01:32:33.868031    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:32:33.868031    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:32:33.871095    6044 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0328 01:32:33.871589    6044 round_trippers.go:577] Response Headers:
	I0328 01:32:33.871589    6044 round_trippers.go:580]     Audit-Id: 993e6d76-750a-466b-8755-ee2d377898d4
	I0328 01:32:33.871589    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:32:33.871767    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:32:33.871767    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:32:33.871888    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:32:33.871972    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:32:33 GMT
	I0328 01:32:33.872388    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"1977","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5588 chars]
	I0328 01:32:34.375377    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:32:34.375377    6044 round_trippers.go:469] Request Headers:
	I0328 01:32:34.375377    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:32:34.375377    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:32:34.379894    6044 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 01:32:34.379894    6044 round_trippers.go:577] Response Headers:
	I0328 01:32:34.379894    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:32:34.379894    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:32:34 GMT
	I0328 01:32:34.379894    6044 round_trippers.go:580]     Audit-Id: c3809b19-bca1-4284-8c1f-ac9dffb986cc
	I0328 01:32:34.379894    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:32:34.379894    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:32:34.379894    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:32:34.380332    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"1977","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5588 chars]
	I0328 01:32:34.380332    6044 node_ready.go:53] node "multinode-240000" has status "Ready":"False"
	I0328 01:32:34.879948    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:32:34.880090    6044 round_trippers.go:469] Request Headers:
	I0328 01:32:34.880090    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:32:34.880090    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:32:34.884713    6044 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 01:32:34.884922    6044 round_trippers.go:577] Response Headers:
	I0328 01:32:34.884922    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:32:34 GMT
	I0328 01:32:34.884922    6044 round_trippers.go:580]     Audit-Id: e9c1b711-34ad-4e05-9cb7-dfcebc1ee3f7
	I0328 01:32:34.885005    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:32:34.885005    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:32:34.885005    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:32:34.885005    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:32:34.885145    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"1977","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5588 chars]
	I0328 01:32:35.382825    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:32:35.382825    6044 round_trippers.go:469] Request Headers:
	I0328 01:32:35.382825    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:32:35.382825    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:32:35.387405    6044 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 01:32:35.387405    6044 round_trippers.go:577] Response Headers:
	I0328 01:32:35.387405    6044 round_trippers.go:580]     Audit-Id: d7064458-c5a7-48f4-9876-3d4121f8b348
	I0328 01:32:35.387488    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:32:35.387488    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:32:35.387488    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:32:35.387488    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:32:35.387488    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:32:35 GMT
	I0328 01:32:35.387652    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"1977","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5588 chars]
	I0328 01:32:35.871622    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:32:35.871622    6044 round_trippers.go:469] Request Headers:
	I0328 01:32:35.871622    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:32:35.871622    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:32:35.882622    6044 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0328 01:32:35.883132    6044 round_trippers.go:577] Response Headers:
	I0328 01:32:35.883132    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:32:35 GMT
	I0328 01:32:35.883132    6044 round_trippers.go:580]     Audit-Id: b5139b39-185b-4b88-99c7-e36383c18949
	I0328 01:32:35.883132    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:32:35.883173    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:32:35.883173    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:32:35.883173    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:32:35.883629    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"1977","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5588 chars]
	I0328 01:32:36.372774    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:32:36.373040    6044 round_trippers.go:469] Request Headers:
	I0328 01:32:36.373040    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:32:36.373040    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:32:36.377348    6044 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 01:32:36.377348    6044 round_trippers.go:577] Response Headers:
	I0328 01:32:36.377348    6044 round_trippers.go:580]     Audit-Id: 8a9fddb2-92a2-4603-afea-98d373e119d2
	I0328 01:32:36.377348    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:32:36.377584    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:32:36.377584    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:32:36.377584    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:32:36.377584    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:32:36 GMT
	I0328 01:32:36.378042    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"1977","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5588 chars]
	I0328 01:32:36.876519    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:32:36.876587    6044 round_trippers.go:469] Request Headers:
	I0328 01:32:36.876587    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:32:36.876587    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:32:36.881370    6044 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 01:32:36.881370    6044 round_trippers.go:577] Response Headers:
	I0328 01:32:36.881998    6044 round_trippers.go:580]     Audit-Id: 8e697d4e-b706-4e07-b872-12a2b5b6b694
	I0328 01:32:36.881998    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:32:36.881998    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:32:36.881998    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:32:36.881998    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:32:36.881998    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:32:36 GMT
	I0328 01:32:36.882474    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"1977","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5588 chars]
	I0328 01:32:36.883064    6044 node_ready.go:53] node "multinode-240000" has status "Ready":"False"
	I0328 01:32:37.378313    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:32:37.378313    6044 round_trippers.go:469] Request Headers:
	I0328 01:32:37.378313    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:32:37.378313    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:32:37.382162    6044 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0328 01:32:37.382209    6044 round_trippers.go:577] Response Headers:
	I0328 01:32:37.382257    6044 round_trippers.go:580]     Audit-Id: 76b9028f-f6bb-44bb-b0ce-f48f4f692c58
	I0328 01:32:37.382257    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:32:37.382300    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:32:37.382300    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:32:37.382300    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:32:37.382341    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:32:37 GMT
	I0328 01:32:37.382341    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"1977","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5588 chars]
	I0328 01:32:37.868051    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:32:37.868051    6044 round_trippers.go:469] Request Headers:
	I0328 01:32:37.868051    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:32:37.868051    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:32:37.872686    6044 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 01:32:37.872686    6044 round_trippers.go:577] Response Headers:
	I0328 01:32:37.872686    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:32:37.872686    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:32:37.872686    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:32:37.872686    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:32:37 GMT
	I0328 01:32:37.872686    6044 round_trippers.go:580]     Audit-Id: 1953861f-42d0-409b-89ec-3afc3e2977fa
	I0328 01:32:37.872686    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:32:37.873206    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"1977","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5588 chars]
	I0328 01:32:38.373283    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:32:38.373283    6044 round_trippers.go:469] Request Headers:
	I0328 01:32:38.373283    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:32:38.373283    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:32:38.376594    6044 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0328 01:32:38.376594    6044 round_trippers.go:577] Response Headers:
	I0328 01:32:38.376594    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:32:38.376594    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:32:38.376594    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:32:38.376594    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:32:38.376594    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:32:38 GMT
	I0328 01:32:38.376594    6044 round_trippers.go:580]     Audit-Id: 42d490b1-e665-4663-99af-640412839bc9
	I0328 01:32:38.377098    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"1977","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5588 chars]
	I0328 01:32:38.878200    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:32:38.878638    6044 round_trippers.go:469] Request Headers:
	I0328 01:32:38.878638    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:32:38.878638    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:32:38.883329    6044 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 01:32:38.883329    6044 round_trippers.go:577] Response Headers:
	I0328 01:32:38.883329    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:32:38 GMT
	I0328 01:32:38.883329    6044 round_trippers.go:580]     Audit-Id: 98566b33-4d46-48a5-94ab-61953e9734ec
	I0328 01:32:38.883329    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:32:38.883329    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:32:38.883329    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:32:38.883329    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:32:38.883740    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"1977","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5588 chars]
	I0328 01:32:38.884391    6044 node_ready.go:53] node "multinode-240000" has status "Ready":"False"
	I0328 01:32:39.383533    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:32:39.383671    6044 round_trippers.go:469] Request Headers:
	I0328 01:32:39.383671    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:32:39.383671    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:32:39.387051    6044 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0328 01:32:39.388046    6044 round_trippers.go:577] Response Headers:
	I0328 01:32:39.388101    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:32:39 GMT
	I0328 01:32:39.388101    6044 round_trippers.go:580]     Audit-Id: 571fe9c1-33df-44f6-8339-c2da3ccb3632
	I0328 01:32:39.388101    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:32:39.388101    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:32:39.388101    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:32:39.388101    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:32:39.388402    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"1977","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5588 chars]
	I0328 01:32:39.870558    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:32:39.870620    6044 round_trippers.go:469] Request Headers:
	I0328 01:32:39.870678    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:32:39.870678    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:32:39.874490    6044 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0328 01:32:39.874490    6044 round_trippers.go:577] Response Headers:
	I0328 01:32:39.874490    6044 round_trippers.go:580]     Audit-Id: fffd0e0d-0a60-48e3-9a07-9e1aea3bf9e3
	I0328 01:32:39.874490    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:32:39.874708    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:32:39.874708    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:32:39.874708    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:32:39.874708    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:32:39 GMT
	I0328 01:32:39.874772    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"1977","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5588 chars]
	I0328 01:32:40.371454    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:32:40.371522    6044 round_trippers.go:469] Request Headers:
	I0328 01:32:40.371522    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:32:40.371522    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:32:40.376314    6044 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 01:32:40.376878    6044 round_trippers.go:577] Response Headers:
	I0328 01:32:40.376878    6044 round_trippers.go:580]     Audit-Id: 18e36c87-19fd-49ab-b28c-3bea3aa72554
	I0328 01:32:40.376878    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:32:40.376878    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:32:40.376878    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:32:40.376878    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:32:40.376878    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:32:40 GMT
	I0328 01:32:40.377116    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"1977","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5588 chars]
	I0328 01:32:40.873252    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:32:40.873318    6044 round_trippers.go:469] Request Headers:
	I0328 01:32:40.873318    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:32:40.873318    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:32:40.877561    6044 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 01:32:40.877561    6044 round_trippers.go:577] Response Headers:
	I0328 01:32:40.877639    6044 round_trippers.go:580]     Audit-Id: 0a0cdc1a-77fe-431c-bafe-0ec33478c4f7
	I0328 01:32:40.877639    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:32:40.877639    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:32:40.877639    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:32:40.877639    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:32:40.877639    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:32:40 GMT
	I0328 01:32:40.878032    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"1977","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5588 chars]
	I0328 01:32:41.378708    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:32:41.378780    6044 round_trippers.go:469] Request Headers:
	I0328 01:32:41.378780    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:32:41.378780    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:32:41.382255    6044 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0328 01:32:41.383176    6044 round_trippers.go:577] Response Headers:
	I0328 01:32:41.383176    6044 round_trippers.go:580]     Audit-Id: 0e606d26-34d0-4b0e-9cca-e05fbfe8de63
	I0328 01:32:41.383176    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:32:41.383176    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:32:41.383176    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:32:41.383176    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:32:41.383176    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:32:41 GMT
	I0328 01:32:41.383353    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"1977","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5588 chars]
	I0328 01:32:41.383981    6044 node_ready.go:53] node "multinode-240000" has status "Ready":"False"
	I0328 01:32:41.884237    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:32:41.884294    6044 round_trippers.go:469] Request Headers:
	I0328 01:32:41.884294    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:32:41.884294    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:32:41.888886    6044 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 01:32:41.889104    6044 round_trippers.go:577] Response Headers:
	I0328 01:32:41.889104    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:32:41.889104    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:32:41.889104    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:32:41 GMT
	I0328 01:32:41.889104    6044 round_trippers.go:580]     Audit-Id: f12be6b1-c765-41c9-9ceb-c500995e76fa
	I0328 01:32:41.889104    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:32:41.889104    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:32:41.889104    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"1977","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5588 chars]
	I0328 01:32:42.382467    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:32:42.382467    6044 round_trippers.go:469] Request Headers:
	I0328 01:32:42.382467    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:32:42.382467    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:32:42.385434    6044 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0328 01:32:42.386424    6044 round_trippers.go:577] Response Headers:
	I0328 01:32:42.386482    6044 round_trippers.go:580]     Audit-Id: 5ffade95-60f5-4b41-85c9-e876d8b7089c
	I0328 01:32:42.386482    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:32:42.386482    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:32:42.386482    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:32:42.386482    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:32:42.386482    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:32:42 GMT
	I0328 01:32:42.386793    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"1977","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5588 chars]
	I0328 01:32:42.869430    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:32:42.869691    6044 round_trippers.go:469] Request Headers:
	I0328 01:32:42.869691    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:32:42.869691    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:32:42.873148    6044 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0328 01:32:42.873745    6044 round_trippers.go:577] Response Headers:
	I0328 01:32:42.873745    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:32:42.873745    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:32:42.873745    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:32:42.873745    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:32:42 GMT
	I0328 01:32:42.873848    6044 round_trippers.go:580]     Audit-Id: d285592f-f933-4d0c-a103-14d83fe62b8c
	I0328 01:32:42.873848    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:32:42.874137    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"1977","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5588 chars]
	I0328 01:32:43.377922    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:32:43.378030    6044 round_trippers.go:469] Request Headers:
	I0328 01:32:43.378030    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:32:43.378102    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:32:43.382005    6044 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0328 01:32:43.382124    6044 round_trippers.go:577] Response Headers:
	I0328 01:32:43.382124    6044 round_trippers.go:580]     Audit-Id: df9b6025-5242-4520-933a-db4697a21b99
	I0328 01:32:43.382124    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:32:43.382124    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:32:43.382124    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:32:43.382124    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:32:43.382124    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:32:43 GMT
	I0328 01:32:43.382124    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"1977","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5588 chars]
	I0328 01:32:43.877638    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:32:43.877748    6044 round_trippers.go:469] Request Headers:
	I0328 01:32:43.877748    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:32:43.877748    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:32:43.881187    6044 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0328 01:32:43.882174    6044 round_trippers.go:577] Response Headers:
	I0328 01:32:43.882174    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:32:43.882174    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:32:43.882174    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:32:43.882174    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:32:43 GMT
	I0328 01:32:43.882174    6044 round_trippers.go:580]     Audit-Id: 04a2bd53-b025-429e-b1f8-242bc9f4680d
	I0328 01:32:43.882174    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:32:43.882803    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"1977","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5588 chars]
	I0328 01:32:43.883383    6044 node_ready.go:53] node "multinode-240000" has status "Ready":"False"
	I0328 01:32:44.383219    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:32:44.383219    6044 round_trippers.go:469] Request Headers:
	I0328 01:32:44.383219    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:32:44.383219    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:32:44.386789    6044 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0328 01:32:44.387069    6044 round_trippers.go:577] Response Headers:
	I0328 01:32:44.387069    6044 round_trippers.go:580]     Audit-Id: 10372e68-b64b-46b9-a463-eefda2b18076
	I0328 01:32:44.387069    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:32:44.387069    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:32:44.387069    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:32:44.387138    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:32:44.387138    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:32:44 GMT
	I0328 01:32:44.387363    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"1977","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5588 chars]
	I0328 01:32:44.870705    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:32:44.870834    6044 round_trippers.go:469] Request Headers:
	I0328 01:32:44.870834    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:32:44.870894    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:32:44.874292    6044 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0328 01:32:44.874292    6044 round_trippers.go:577] Response Headers:
	I0328 01:32:44.874292    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:32:44.874292    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:32:44.874292    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:32:44.874292    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:32:44 GMT
	I0328 01:32:44.874292    6044 round_trippers.go:580]     Audit-Id: a995bb09-6e51-4e67-bc9e-ff3d7e396912
	I0328 01:32:44.874292    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:32:44.874659    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"1977","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5588 chars]
	I0328 01:32:45.378679    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:32:45.378679    6044 round_trippers.go:469] Request Headers:
	I0328 01:32:45.378914    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:32:45.378914    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:32:45.389443    6044 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0328 01:32:45.389443    6044 round_trippers.go:577] Response Headers:
	I0328 01:32:45.389443    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:32:45.389443    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:32:45.389443    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:32:45.389799    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:32:45 GMT
	I0328 01:32:45.389799    6044 round_trippers.go:580]     Audit-Id: df2db671-e9c5-43f4-8fae-58cee154b3fe
	I0328 01:32:45.389799    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:32:45.390238    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"1977","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5588 chars]
	I0328 01:32:45.868672    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:32:45.868751    6044 round_trippers.go:469] Request Headers:
	I0328 01:32:45.868751    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:32:45.868751    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:32:45.873475    6044 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 01:32:45.873475    6044 round_trippers.go:577] Response Headers:
	I0328 01:32:45.873475    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:32:45 GMT
	I0328 01:32:45.873475    6044 round_trippers.go:580]     Audit-Id: e472d716-3677-4946-8e44-1747db6d252a
	I0328 01:32:45.873475    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:32:45.873475    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:32:45.873475    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:32:45.873475    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:32:45.873475    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"1977","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5588 chars]
	I0328 01:32:46.373783    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:32:46.373860    6044 round_trippers.go:469] Request Headers:
	I0328 01:32:46.373860    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:32:46.373912    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:32:46.378308    6044 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 01:32:46.378308    6044 round_trippers.go:577] Response Headers:
	I0328 01:32:46.378308    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:32:46 GMT
	I0328 01:32:46.378308    6044 round_trippers.go:580]     Audit-Id: 4d6d6f86-9b78-410c-9e62-342655933c52
	I0328 01:32:46.378308    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:32:46.378308    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:32:46.378308    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:32:46.378308    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:32:46.378661    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"1977","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5588 chars]
	I0328 01:32:46.379123    6044 node_ready.go:53] node "multinode-240000" has status "Ready":"False"
	I0328 01:32:46.875338    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:32:46.875338    6044 round_trippers.go:469] Request Headers:
	I0328 01:32:46.875338    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:32:46.875338    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:32:46.879919    6044 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 01:32:46.879919    6044 round_trippers.go:577] Response Headers:
	I0328 01:32:46.879919    6044 round_trippers.go:580]     Audit-Id: c1ce1192-bbcf-4e95-a7de-1c4e87a323df
	I0328 01:32:46.879919    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:32:46.879919    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:32:46.879919    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:32:46.879919    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:32:46.880089    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:32:46 GMT
	I0328 01:32:46.880237    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"1977","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5588 chars]
	I0328 01:32:47.379030    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:32:47.379030    6044 round_trippers.go:469] Request Headers:
	I0328 01:32:47.379030    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:32:47.379030    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:32:47.383231    6044 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 01:32:47.383319    6044 round_trippers.go:577] Response Headers:
	I0328 01:32:47.383319    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:32:47.383319    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:32:47 GMT
	I0328 01:32:47.383319    6044 round_trippers.go:580]     Audit-Id: 2279d9f0-92ea-4ff3-b350-b8882bce703a
	I0328 01:32:47.383319    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:32:47.383319    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:32:47.383319    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:32:47.383694    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"1977","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5588 chars]
	I0328 01:32:47.878074    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:32:47.878074    6044 round_trippers.go:469] Request Headers:
	I0328 01:32:47.878074    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:32:47.878365    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:32:47.881658    6044 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0328 01:32:47.881658    6044 round_trippers.go:577] Response Headers:
	I0328 01:32:47.881658    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:32:47.881658    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:32:47 GMT
	I0328 01:32:47.882461    6044 round_trippers.go:580]     Audit-Id: 6395c584-704f-4004-a39f-4bc22d258ffa
	I0328 01:32:47.882461    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:32:47.882461    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:32:47.882461    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:32:47.882790    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"1977","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5588 chars]
	I0328 01:32:48.381990    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:32:48.381990    6044 round_trippers.go:469] Request Headers:
	I0328 01:32:48.381990    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:32:48.381990    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:32:48.386494    6044 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 01:32:48.386494    6044 round_trippers.go:577] Response Headers:
	I0328 01:32:48.386494    6044 round_trippers.go:580]     Audit-Id: d6b0ec0a-334c-4b41-a38a-080f47b44eb8
	I0328 01:32:48.386494    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:32:48.386494    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:32:48.386494    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:32:48.386494    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:32:48.386494    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:32:48 GMT
	I0328 01:32:48.387199    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"1977","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5588 chars]
	I0328 01:32:48.387730    6044 node_ready.go:53] node "multinode-240000" has status "Ready":"False"
	I0328 01:32:48.872803    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:32:48.872803    6044 round_trippers.go:469] Request Headers:
	I0328 01:32:48.872803    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:32:48.872803    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:32:48.877041    6044 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 01:32:48.877041    6044 round_trippers.go:577] Response Headers:
	I0328 01:32:48.877041    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:32:48.877041    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:32:48 GMT
	I0328 01:32:48.877041    6044 round_trippers.go:580]     Audit-Id: c5c12cd2-83af-43fd-80e0-6f0c4e9d9899
	I0328 01:32:48.877041    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:32:48.877261    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:32:48.877261    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:32:48.877365    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"1977","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5588 chars]
	I0328 01:32:49.375326    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:32:49.375386    6044 round_trippers.go:469] Request Headers:
	I0328 01:32:49.375386    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:32:49.375452    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:32:49.383234    6044 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0328 01:32:49.383234    6044 round_trippers.go:577] Response Headers:
	I0328 01:32:49.383234    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:32:49 GMT
	I0328 01:32:49.383234    6044 round_trippers.go:580]     Audit-Id: e00020e8-99ea-467c-a342-259bfd21722f
	I0328 01:32:49.383234    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:32:49.383234    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:32:49.383234    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:32:49.383234    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:32:49.383590    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"1977","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5588 chars]
	I0328 01:32:49.876823    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:32:49.876823    6044 round_trippers.go:469] Request Headers:
	I0328 01:32:49.876823    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:32:49.876823    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:32:49.883192    6044 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0328 01:32:49.883381    6044 round_trippers.go:577] Response Headers:
	I0328 01:32:49.883381    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:32:49.883381    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:32:49.883381    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:32:49 GMT
	I0328 01:32:49.883381    6044 round_trippers.go:580]     Audit-Id: 4ec65566-e313-4230-abf5-430325415f15
	I0328 01:32:49.883381    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:32:49.883381    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:32:49.884220    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"1977","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5588 chars]
	I0328 01:32:50.369462    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:32:50.369462    6044 round_trippers.go:469] Request Headers:
	I0328 01:32:50.369462    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:32:50.369462    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:32:50.373894    6044 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 01:32:50.373894    6044 round_trippers.go:577] Response Headers:
	I0328 01:32:50.374489    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:32:50.374489    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:32:50.374489    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:32:50 GMT
	I0328 01:32:50.374489    6044 round_trippers.go:580]     Audit-Id: d8731f61-da51-4051-90ba-561479eb7934
	I0328 01:32:50.374489    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:32:50.374489    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:32:50.374788    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"1977","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5588 chars]
	I0328 01:32:50.879092    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:32:50.879185    6044 round_trippers.go:469] Request Headers:
	I0328 01:32:50.879185    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:32:50.879185    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:32:50.885712    6044 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0328 01:32:50.885712    6044 round_trippers.go:577] Response Headers:
	I0328 01:32:50.885712    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:32:50.885712    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:32:50.885712    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:32:50.885712    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:32:50 GMT
	I0328 01:32:50.885712    6044 round_trippers.go:580]     Audit-Id: 266d4fe9-f340-4dac-90e1-346b7a3a500b
	I0328 01:32:50.885712    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:32:50.886092    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"1977","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5588 chars]
	I0328 01:32:50.886834    6044 node_ready.go:53] node "multinode-240000" has status "Ready":"False"
	I0328 01:32:51.380606    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:32:51.380606    6044 round_trippers.go:469] Request Headers:
	I0328 01:32:51.380606    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:32:51.380872    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:32:51.390533    6044 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0328 01:32:51.390533    6044 round_trippers.go:577] Response Headers:
	I0328 01:32:51.390533    6044 round_trippers.go:580]     Audit-Id: ea0301aa-f324-4b13-b581-aa01ca97daf2
	I0328 01:32:51.390533    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:32:51.390533    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:32:51.390533    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:32:51.390533    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:32:51.390533    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:32:51 GMT
	I0328 01:32:51.390533    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"1977","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5588 chars]
	I0328 01:32:51.868835    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:32:51.868835    6044 round_trippers.go:469] Request Headers:
	I0328 01:32:51.868835    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:32:51.868835    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:32:51.874115    6044 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 01:32:51.874199    6044 round_trippers.go:577] Response Headers:
	I0328 01:32:51.874199    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:32:51.874199    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:32:51 GMT
	I0328 01:32:51.874199    6044 round_trippers.go:580]     Audit-Id: a985b2be-e4a5-4a37-aea5-feec7817ef98
	I0328 01:32:51.874199    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:32:51.874199    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:32:51.874199    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:32:51.874199    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"1977","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5588 chars]
	I0328 01:32:52.370755    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:32:52.370755    6044 round_trippers.go:469] Request Headers:
	I0328 01:32:52.370755    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:32:52.370755    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:32:52.375478    6044 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 01:32:52.376178    6044 round_trippers.go:577] Response Headers:
	I0328 01:32:52.376178    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:32:52.376178    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:32:52.376178    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:32:52 GMT
	I0328 01:32:52.376178    6044 round_trippers.go:580]     Audit-Id: fad1cb7f-0425-4a93-819a-1945a6d6b3c2
	I0328 01:32:52.376178    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:32:52.376178    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:32:52.376527    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"1977","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5588 chars]
	I0328 01:32:52.876133    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:32:52.876209    6044 round_trippers.go:469] Request Headers:
	I0328 01:32:52.876209    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:32:52.876209    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:32:52.884190    6044 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0328 01:32:52.885123    6044 round_trippers.go:577] Response Headers:
	I0328 01:32:52.885123    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:32:52.885123    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:32:52 GMT
	I0328 01:32:52.885123    6044 round_trippers.go:580]     Audit-Id: 944b86ee-6fe4-429a-a4b9-164efd33b768
	I0328 01:32:52.885123    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:32:52.885123    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:32:52.885123    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:32:52.886158    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"1977","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5588 chars]
	I0328 01:32:53.377575    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:32:53.377636    6044 round_trippers.go:469] Request Headers:
	I0328 01:32:53.377636    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:32:53.377636    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:32:53.381680    6044 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 01:32:53.381680    6044 round_trippers.go:577] Response Headers:
	I0328 01:32:53.382093    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:32:53.382093    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:32:53.382093    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:32:53.382093    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:32:53.382093    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:32:53 GMT
	I0328 01:32:53.382093    6044 round_trippers.go:580]     Audit-Id: 35749f5f-2656-424c-9e85-54e3aeca7405
	I0328 01:32:53.382363    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"1977","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5588 chars]
	I0328 01:32:53.383136    6044 node_ready.go:53] node "multinode-240000" has status "Ready":"False"
	I0328 01:32:53.881296    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:32:53.881423    6044 round_trippers.go:469] Request Headers:
	I0328 01:32:53.881423    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:32:53.881423    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:32:53.892464    6044 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0328 01:32:53.892588    6044 round_trippers.go:577] Response Headers:
	I0328 01:32:53.892588    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:32:53.892588    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:32:53 GMT
	I0328 01:32:53.892588    6044 round_trippers.go:580]     Audit-Id: f2158fef-8ec5-43ee-b6cd-fd7efe401602
	I0328 01:32:53.892588    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:32:53.892588    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:32:53.892588    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:32:53.892588    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"1977","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5588 chars]
	I0328 01:32:54.370866    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:32:54.370866    6044 round_trippers.go:469] Request Headers:
	I0328 01:32:54.370866    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:32:54.370866    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:32:54.377336    6044 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0328 01:32:54.377336    6044 round_trippers.go:577] Response Headers:
	I0328 01:32:54.377452    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:32:54.377452    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:32:54.377452    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:32:54 GMT
	I0328 01:32:54.377452    6044 round_trippers.go:580]     Audit-Id: e8a80d99-932f-4b65-aa10-5da7a8d297e5
	I0328 01:32:54.377452    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:32:54.377452    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:32:54.377658    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"2021","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5365 chars]
	I0328 01:32:54.378365    6044 node_ready.go:49] node "multinode-240000" has status "Ready":"True"
	I0328 01:32:54.378483    6044 node_ready.go:38] duration metric: took 29.0104766s for node "multinode-240000" to be "Ready" ...
	I0328 01:32:54.378542    6044 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0328 01:32:54.378610    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/namespaces/kube-system/pods
	I0328 01:32:54.378685    6044 round_trippers.go:469] Request Headers:
	I0328 01:32:54.378685    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:32:54.378737    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:32:54.384859    6044 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0328 01:32:54.384859    6044 round_trippers.go:577] Response Headers:
	I0328 01:32:54.384859    6044 round_trippers.go:580]     Audit-Id: 19f173d9-ef52-48a3-b9cc-4dffcab52055
	I0328 01:32:54.385857    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:32:54.385857    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:32:54.385880    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:32:54.385880    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:32:54.385880    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:32:54 GMT
	I0328 01:32:54.387581    6044 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"2021"},"items":[{"metadata":{"name":"coredns-76f75df574-776ph","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"dc1416cc-736d-4eab-b95d-e963572b78e3","resourceVersion":"1866","creationTimestamp":"2024-03-28T01:07:44Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"840f6c47-adf3-4c08-8b73-04e55f98f236","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:07:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"840f6c47-adf3-4c08-8b73-04e55f98f236\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 86583 chars]
	I0328 01:32:54.391748    6044 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-776ph" in "kube-system" namespace to be "Ready" ...
	I0328 01:32:54.391748    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-776ph
	I0328 01:32:54.391748    6044 round_trippers.go:469] Request Headers:
	I0328 01:32:54.391748    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:32:54.391748    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:32:54.395448    6044 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0328 01:32:54.395472    6044 round_trippers.go:577] Response Headers:
	I0328 01:32:54.395472    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:32:54.395472    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:32:54 GMT
	I0328 01:32:54.395472    6044 round_trippers.go:580]     Audit-Id: 3725b522-3420-476f-a55c-b4d7982bcc4c
	I0328 01:32:54.395472    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:32:54.395472    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:32:54.395472    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:32:54.396576    6044 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-76f75df574-776ph","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"dc1416cc-736d-4eab-b95d-e963572b78e3","resourceVersion":"1866","creationTimestamp":"2024-03-28T01:07:44Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"840f6c47-adf3-4c08-8b73-04e55f98f236","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:07:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"840f6c47-adf3-4c08-8b73-04e55f98f236\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0328 01:32:54.397141    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:32:54.397202    6044 round_trippers.go:469] Request Headers:
	I0328 01:32:54.397202    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:32:54.397202    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:32:54.399419    6044 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0328 01:32:54.399419    6044 round_trippers.go:577] Response Headers:
	I0328 01:32:54.399419    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:32:54.399419    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:32:54.399419    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:32:54.399419    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:32:54.399419    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:32:54 GMT
	I0328 01:32:54.399419    6044 round_trippers.go:580]     Audit-Id: edd68f83-b434-48be-af9a-ecd6bf60b240
	I0328 01:32:54.400659    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"2021","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5365 chars]
	I0328 01:32:54.906747    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-776ph
	I0328 01:32:54.906911    6044 round_trippers.go:469] Request Headers:
	I0328 01:32:54.906974    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:32:54.906974    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:32:54.911719    6044 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 01:32:54.912142    6044 round_trippers.go:577] Response Headers:
	I0328 01:32:54.912142    6044 round_trippers.go:580]     Audit-Id: 1e7095a2-9b3b-4774-84ce-e770efebd411
	I0328 01:32:54.912142    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:32:54.912142    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:32:54.912142    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:32:54.912224    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:32:54.912224    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:32:54 GMT
	I0328 01:32:54.912450    6044 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-76f75df574-776ph","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"dc1416cc-736d-4eab-b95d-e963572b78e3","resourceVersion":"1866","creationTimestamp":"2024-03-28T01:07:44Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"840f6c47-adf3-4c08-8b73-04e55f98f236","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:07:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"840f6c47-adf3-4c08-8b73-04e55f98f236\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0328 01:32:54.913169    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:32:54.913245    6044 round_trippers.go:469] Request Headers:
	I0328 01:32:54.913245    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:32:54.913245    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:32:54.917492    6044 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 01:32:54.917492    6044 round_trippers.go:577] Response Headers:
	I0328 01:32:54.917692    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:32:54.917692    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:32:54 GMT
	I0328 01:32:54.917692    6044 round_trippers.go:580]     Audit-Id: ce1493d1-2898-48ae-be9a-69ddca902283
	I0328 01:32:54.917692    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:32:54.917692    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:32:54.917692    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:32:54.917982    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"2021","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5365 chars]
	I0328 01:32:55.403614    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-776ph
	I0328 01:32:55.403614    6044 round_trippers.go:469] Request Headers:
	I0328 01:32:55.403614    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:32:55.403614    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:32:55.408143    6044 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 01:32:55.408377    6044 round_trippers.go:577] Response Headers:
	I0328 01:32:55.408377    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:32:55 GMT
	I0328 01:32:55.408377    6044 round_trippers.go:580]     Audit-Id: ab4ce259-c92b-4b83-afcf-b210dfc6a8f0
	I0328 01:32:55.408377    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:32:55.408377    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:32:55.408377    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:32:55.408377    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:32:55.409051    6044 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-76f75df574-776ph","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"dc1416cc-736d-4eab-b95d-e963572b78e3","resourceVersion":"1866","creationTimestamp":"2024-03-28T01:07:44Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"840f6c47-adf3-4c08-8b73-04e55f98f236","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:07:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"840f6c47-adf3-4c08-8b73-04e55f98f236\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0328 01:32:55.409276    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:32:55.409276    6044 round_trippers.go:469] Request Headers:
	I0328 01:32:55.409807    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:32:55.409807    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:32:55.412527    6044 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0328 01:32:55.413523    6044 round_trippers.go:577] Response Headers:
	I0328 01:32:55.413523    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:32:55.413523    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:32:55.413523    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:32:55.413523    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:32:55 GMT
	I0328 01:32:55.413523    6044 round_trippers.go:580]     Audit-Id: 059bdf85-ec7f-4459-a750-3cb99cefc952
	I0328 01:32:55.413523    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:32:55.414274    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"2021","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5365 chars]
	I0328 01:32:55.904672    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-776ph
	I0328 01:32:55.904672    6044 round_trippers.go:469] Request Headers:
	I0328 01:32:55.904672    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:32:55.904672    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:32:55.912964    6044 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0328 01:32:55.912964    6044 round_trippers.go:577] Response Headers:
	I0328 01:32:55.912964    6044 round_trippers.go:580]     Audit-Id: 6c6b60a2-e463-4789-be2a-feba8b1868db
	I0328 01:32:55.912964    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:32:55.912964    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:32:55.912964    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:32:55.912964    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:32:55.912964    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:32:55 GMT
	I0328 01:32:55.914035    6044 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-76f75df574-776ph","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"dc1416cc-736d-4eab-b95d-e963572b78e3","resourceVersion":"1866","creationTimestamp":"2024-03-28T01:07:44Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"840f6c47-adf3-4c08-8b73-04e55f98f236","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:07:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"840f6c47-adf3-4c08-8b73-04e55f98f236\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0328 01:32:55.914735    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:32:55.914735    6044 round_trippers.go:469] Request Headers:
	I0328 01:32:55.914735    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:32:55.914735    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:32:55.918357    6044 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0328 01:32:55.918357    6044 round_trippers.go:577] Response Headers:
	I0328 01:32:55.918357    6044 round_trippers.go:580]     Audit-Id: 879bab20-7722-46cf-af8c-ce75fd3cb367
	I0328 01:32:55.918357    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:32:55.918357    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:32:55.918357    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:32:55.918357    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:32:55.918357    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:32:55 GMT
	I0328 01:32:55.918357    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"2021","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5365 chars]
	I0328 01:32:56.406797    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-776ph
	I0328 01:32:56.407056    6044 round_trippers.go:469] Request Headers:
	I0328 01:32:56.407056    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:32:56.407056    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:32:56.411203    6044 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 01:32:56.411714    6044 round_trippers.go:577] Response Headers:
	I0328 01:32:56.411714    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:32:56.411714    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:32:56.411714    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:32:56.411714    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:32:56 GMT
	I0328 01:32:56.411714    6044 round_trippers.go:580]     Audit-Id: d5f92fa5-70f5-4a26-881c-10fdf512c27d
	I0328 01:32:56.411797    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:32:56.411995    6044 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-76f75df574-776ph","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"dc1416cc-736d-4eab-b95d-e963572b78e3","resourceVersion":"1866","creationTimestamp":"2024-03-28T01:07:44Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"840f6c47-adf3-4c08-8b73-04e55f98f236","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:07:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"840f6c47-adf3-4c08-8b73-04e55f98f236\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0328 01:32:56.413361    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:32:56.413433    6044 round_trippers.go:469] Request Headers:
	I0328 01:32:56.413433    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:32:56.413433    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:32:56.416684    6044 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0328 01:32:56.416684    6044 round_trippers.go:577] Response Headers:
	I0328 01:32:56.416913    6044 round_trippers.go:580]     Audit-Id: 2f11617b-d64e-457b-8ffe-8d453e97c402
	I0328 01:32:56.416913    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:32:56.416913    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:32:56.416913    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:32:56.416913    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:32:56.416913    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:32:56 GMT
	I0328 01:32:56.417108    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"2021","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5365 chars]
	I0328 01:32:56.417948    6044 pod_ready.go:102] pod "coredns-76f75df574-776ph" in "kube-system" namespace has status "Ready":"False"
	I0328 01:32:56.898983    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-776ph
	I0328 01:32:56.898983    6044 round_trippers.go:469] Request Headers:
	I0328 01:32:56.898983    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:32:56.898983    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:32:56.902689    6044 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0328 01:32:56.902689    6044 round_trippers.go:577] Response Headers:
	I0328 01:32:56.902689    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:32:56.902689    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:32:56.902689    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:32:56.902689    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:32:56.902689    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:32:56 GMT
	I0328 01:32:56.902689    6044 round_trippers.go:580]     Audit-Id: 41d6fdc1-d5e3-4b76-8b9e-a50b670e6123
	I0328 01:32:56.903822    6044 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-76f75df574-776ph","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"dc1416cc-736d-4eab-b95d-e963572b78e3","resourceVersion":"1866","creationTimestamp":"2024-03-28T01:07:44Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"840f6c47-adf3-4c08-8b73-04e55f98f236","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:07:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"840f6c47-adf3-4c08-8b73-04e55f98f236\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0328 01:32:56.904646    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:32:56.904706    6044 round_trippers.go:469] Request Headers:
	I0328 01:32:56.904706    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:32:56.904763    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:32:56.908598    6044 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0328 01:32:56.908598    6044 round_trippers.go:577] Response Headers:
	I0328 01:32:56.908598    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:32:56.908598    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:32:56.908598    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:32:56 GMT
	I0328 01:32:56.908598    6044 round_trippers.go:580]     Audit-Id: c4961037-0245-45ed-a04b-9fac0c93a93c
	I0328 01:32:56.908598    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:32:56.908598    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:32:56.909677    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"2021","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5365 chars]
	I0328 01:32:57.399559    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-776ph
	I0328 01:32:57.399559    6044 round_trippers.go:469] Request Headers:
	I0328 01:32:57.399559    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:32:57.399559    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:32:57.404352    6044 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 01:32:57.404352    6044 round_trippers.go:577] Response Headers:
	I0328 01:32:57.404352    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:32:57.404520    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:32:57 GMT
	I0328 01:32:57.404520    6044 round_trippers.go:580]     Audit-Id: cf7c1809-fc3d-47db-8ce2-5272a2a7c5ce
	I0328 01:32:57.404520    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:32:57.404520    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:32:57.404520    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:32:57.405181    6044 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-76f75df574-776ph","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"dc1416cc-736d-4eab-b95d-e963572b78e3","resourceVersion":"1866","creationTimestamp":"2024-03-28T01:07:44Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"840f6c47-adf3-4c08-8b73-04e55f98f236","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:07:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"840f6c47-adf3-4c08-8b73-04e55f98f236\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0328 01:32:57.405900    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:32:57.405900    6044 round_trippers.go:469] Request Headers:
	I0328 01:32:57.405981    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:32:57.405981    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:32:57.409899    6044 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0328 01:32:57.410016    6044 round_trippers.go:577] Response Headers:
	I0328 01:32:57.410016    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:32:57.410016    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:32:57.410016    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:32:57.410016    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:32:57 GMT
	I0328 01:32:57.410084    6044 round_trippers.go:580]     Audit-Id: b03a9b6d-bd67-4b17-8efd-9b3b455a1572
	I0328 01:32:57.410084    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:32:57.410670    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"2021","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5365 chars]
	I0328 01:32:57.897474    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-776ph
	I0328 01:32:57.897474    6044 round_trippers.go:469] Request Headers:
	I0328 01:32:57.897474    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:32:57.897474    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:32:57.903263    6044 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 01:32:57.903263    6044 round_trippers.go:577] Response Headers:
	I0328 01:32:57.903538    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:32:57.903538    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:32:57 GMT
	I0328 01:32:57.903538    6044 round_trippers.go:580]     Audit-Id: 748225c4-3c08-4936-9e12-175c065f3d2e
	I0328 01:32:57.903538    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:32:57.903538    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:32:57.903538    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:32:57.903538    6044 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-76f75df574-776ph","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"dc1416cc-736d-4eab-b95d-e963572b78e3","resourceVersion":"1866","creationTimestamp":"2024-03-28T01:07:44Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"840f6c47-adf3-4c08-8b73-04e55f98f236","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:07:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"840f6c47-adf3-4c08-8b73-04e55f98f236\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0328 01:32:57.904640    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:32:57.904640    6044 round_trippers.go:469] Request Headers:
	I0328 01:32:57.904640    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:32:57.904640    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:32:57.908150    6044 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0328 01:32:57.908150    6044 round_trippers.go:577] Response Headers:
	I0328 01:32:57.908150    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:32:57.908150    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:32:57.908150    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:32:57.908150    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:32:57 GMT
	I0328 01:32:57.908150    6044 round_trippers.go:580]     Audit-Id: 2c3ec668-b3e8-4c8b-9c9c-5b3a1735b5d1
	I0328 01:32:57.908150    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:32:57.908706    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"2022","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5245 chars]
	I0328 01:32:58.396873    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-776ph
	I0328 01:32:58.396873    6044 round_trippers.go:469] Request Headers:
	I0328 01:32:58.396873    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:32:58.397004    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:32:58.403321    6044 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0328 01:32:58.403321    6044 round_trippers.go:577] Response Headers:
	I0328 01:32:58.403868    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:32:58.403868    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:32:58.403868    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:32:58.403868    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:32:58.403868    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:32:58 GMT
	I0328 01:32:58.403868    6044 round_trippers.go:580]     Audit-Id: 0a72cd07-ff3e-4f68-bc87-9c7335ffa3e2
	I0328 01:32:58.404564    6044 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-76f75df574-776ph","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"dc1416cc-736d-4eab-b95d-e963572b78e3","resourceVersion":"1866","creationTimestamp":"2024-03-28T01:07:44Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"840f6c47-adf3-4c08-8b73-04e55f98f236","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:07:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"840f6c47-adf3-4c08-8b73-04e55f98f236\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0328 01:32:58.404790    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:32:58.404790    6044 round_trippers.go:469] Request Headers:
	I0328 01:32:58.404790    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:32:58.404790    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:32:58.412933    6044 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0328 01:32:58.412990    6044 round_trippers.go:577] Response Headers:
	I0328 01:32:58.412990    6044 round_trippers.go:580]     Audit-Id: 1212ce18-dfad-4550-8df5-35ae43af75e6
	I0328 01:32:58.413056    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:32:58.413056    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:32:58.413056    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:32:58.413113    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:32:58.413113    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:32:58 GMT
	I0328 01:32:58.413610    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"2022","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5245 chars]
	I0328 01:32:58.899376    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-776ph
	I0328 01:32:58.899470    6044 round_trippers.go:469] Request Headers:
	I0328 01:32:58.899470    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:32:58.899470    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:32:58.907264    6044 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0328 01:32:58.907986    6044 round_trippers.go:577] Response Headers:
	I0328 01:32:58.907986    6044 round_trippers.go:580]     Audit-Id: 9f3c64c9-ac43-46f0-8649-4c89cd65f0f4
	I0328 01:32:58.907986    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:32:58.908030    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:32:58.908030    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:32:58.908030    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:32:58.908030    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:32:58 GMT
	I0328 01:32:58.908329    6044 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-76f75df574-776ph","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"dc1416cc-736d-4eab-b95d-e963572b78e3","resourceVersion":"1866","creationTimestamp":"2024-03-28T01:07:44Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"840f6c47-adf3-4c08-8b73-04e55f98f236","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:07:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"840f6c47-adf3-4c08-8b73-04e55f98f236\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0328 01:32:58.909219    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:32:58.909219    6044 round_trippers.go:469] Request Headers:
	I0328 01:32:58.909219    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:32:58.909219    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:32:58.912323    6044 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0328 01:32:58.912323    6044 round_trippers.go:577] Response Headers:
	I0328 01:32:58.912323    6044 round_trippers.go:580]     Audit-Id: 8f88c7ab-2f34-48d1-820e-f358ede78d3c
	I0328 01:32:58.912323    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:32:58.912323    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:32:58.912323    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:32:58.912323    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:32:58.912323    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:32:58 GMT
	I0328 01:32:58.913352    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"2022","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5245 chars]
	I0328 01:32:58.913352    6044 pod_ready.go:102] pod "coredns-76f75df574-776ph" in "kube-system" namespace has status "Ready":"False"
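	
	The repeated GET/response pairs above are minikube's readiness poll: it fetches the CoreDNS Pod, inspects its `status.conditions`, and logs `"Ready":"False"` until the condition flips. A minimal illustrative sketch of that check (not minikube's actual code; the function name `pod_is_ready` is hypothetical), assuming a Pod object shaped like the JSON response bodies logged here:
	
	```python
	import json
	
	def pod_is_ready(pod: dict) -> bool:
	    """Return True iff the Pod has a condition of type 'Ready' with status 'True'."""
	    for cond in pod.get("status", {}).get("conditions", []):
	        if cond.get("type") == "Ready":
	            return cond.get("status") == "True"
	    # No Ready condition reported yet (e.g. Pod still pending)
	    return False
	
	# A trimmed stand-in for the API response bodies seen in this log:
	pod = json.loads(
	    '{"kind":"Pod","apiVersion":"v1",'
	    '"status":{"conditions":[{"type":"Ready","status":"False"}]}}'
	)
	print(pod_is_ready(pod))  # prints False, matching the "Ready":"False" log line
	```
	
	minikube repeats this check on a fixed interval (roughly every 500 ms here, judging by the timestamps) until the condition becomes True or the wait times out.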
	I0328 01:32:59.399712    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-776ph
	I0328 01:32:59.399712    6044 round_trippers.go:469] Request Headers:
	I0328 01:32:59.399712    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:32:59.399712    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:32:59.408239    6044 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0328 01:32:59.408239    6044 round_trippers.go:577] Response Headers:
	I0328 01:32:59.408239    6044 round_trippers.go:580]     Audit-Id: 5736abbd-1de1-4609-86b4-09975187adcd
	I0328 01:32:59.408239    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:32:59.408239    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:32:59.408239    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:32:59.408239    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:32:59.408239    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:32:59 GMT
	I0328 01:32:59.408985    6044 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-76f75df574-776ph","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"dc1416cc-736d-4eab-b95d-e963572b78e3","resourceVersion":"1866","creationTimestamp":"2024-03-28T01:07:44Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"840f6c47-adf3-4c08-8b73-04e55f98f236","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:07:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"840f6c47-adf3-4c08-8b73-04e55f98f236\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0328 01:32:59.409698    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:32:59.409698    6044 round_trippers.go:469] Request Headers:
	I0328 01:32:59.409698    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:32:59.409698    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:32:59.412880    6044 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0328 01:32:59.413075    6044 round_trippers.go:577] Response Headers:
	I0328 01:32:59.413075    6044 round_trippers.go:580]     Audit-Id: 88e5acea-62da-4386-9b02-a84e57383345
	I0328 01:32:59.413075    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:32:59.413075    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:32:59.413075    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:32:59.413075    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:32:59.413075    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:32:59 GMT
	I0328 01:32:59.413337    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"2022","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5245 chars]
	I0328 01:32:59.899875    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-776ph
	I0328 01:32:59.899875    6044 round_trippers.go:469] Request Headers:
	I0328 01:32:59.899962    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:32:59.899962    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:32:59.904286    6044 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 01:32:59.904355    6044 round_trippers.go:577] Response Headers:
	I0328 01:32:59.904355    6044 round_trippers.go:580]     Audit-Id: fcf91b63-1704-4e2b-b051-8e407f3f7bbd
	I0328 01:32:59.904355    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:32:59.904355    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:32:59.904355    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:32:59.904355    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:32:59.904355    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:32:59 GMT
	I0328 01:32:59.904714    6044 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-76f75df574-776ph","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"dc1416cc-736d-4eab-b95d-e963572b78e3","resourceVersion":"1866","creationTimestamp":"2024-03-28T01:07:44Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"840f6c47-adf3-4c08-8b73-04e55f98f236","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:07:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"840f6c47-adf3-4c08-8b73-04e55f98f236\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0328 01:32:59.905638    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:32:59.905638    6044 round_trippers.go:469] Request Headers:
	I0328 01:32:59.905732    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:32:59.905732    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:32:59.912934    6044 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0328 01:32:59.912934    6044 round_trippers.go:577] Response Headers:
	I0328 01:32:59.912934    6044 round_trippers.go:580]     Audit-Id: ed41d076-b6ea-43b2-a77c-993328bbda10
	I0328 01:32:59.912934    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:32:59.912934    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:32:59.912934    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:32:59.912934    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:32:59.912934    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:32:59 GMT
	I0328 01:32:59.913329    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"2022","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5245 chars]
	I0328 01:33:00.398140    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-776ph
	I0328 01:33:00.398140    6044 round_trippers.go:469] Request Headers:
	I0328 01:33:00.398140    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:33:00.398140    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:33:00.404404    6044 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0328 01:33:00.404404    6044 round_trippers.go:577] Response Headers:
	I0328 01:33:00.404404    6044 round_trippers.go:580]     Audit-Id: a002eafa-4774-481e-9965-040115cbf507
	I0328 01:33:00.404404    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:33:00.404404    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:33:00.404404    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:33:00.404404    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:33:00.404404    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:33:00 GMT
	I0328 01:33:00.404716    6044 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-76f75df574-776ph","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"dc1416cc-736d-4eab-b95d-e963572b78e3","resourceVersion":"1866","creationTimestamp":"2024-03-28T01:07:44Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"840f6c47-adf3-4c08-8b73-04e55f98f236","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:07:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"840f6c47-adf3-4c08-8b73-04e55f98f236\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0328 01:33:00.405571    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:33:00.405629    6044 round_trippers.go:469] Request Headers:
	I0328 01:33:00.405629    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:33:00.405629    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:33:00.408500    6044 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0328 01:33:00.408500    6044 round_trippers.go:577] Response Headers:
	I0328 01:33:00.408500    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:33:00.408500    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:33:00 GMT
	I0328 01:33:00.408500    6044 round_trippers.go:580]     Audit-Id: e2474345-9ff7-46d6-845b-5362b91064f4
	I0328 01:33:00.408500    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:33:00.408500    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:33:00.408500    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:33:00.409322    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"2022","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5245 chars]
	I0328 01:33:00.893960    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-776ph
	I0328 01:33:00.893960    6044 round_trippers.go:469] Request Headers:
	I0328 01:33:00.893960    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:33:00.893960    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:33:00.896670    6044 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0328 01:33:00.896670    6044 round_trippers.go:577] Response Headers:
	I0328 01:33:00.896670    6044 round_trippers.go:580]     Audit-Id: 8bc270aa-cd40-4ba2-b444-70c542cdeccc
	I0328 01:33:00.896670    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:33:00.896670    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:33:00.896670    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:33:00.896670    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:33:00.896670    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:33:00 GMT
	I0328 01:33:00.897878    6044 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-76f75df574-776ph","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"dc1416cc-736d-4eab-b95d-e963572b78e3","resourceVersion":"1866","creationTimestamp":"2024-03-28T01:07:44Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"840f6c47-adf3-4c08-8b73-04e55f98f236","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:07:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"840f6c47-adf3-4c08-8b73-04e55f98f236\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0328 01:33:00.898163    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:33:00.898690    6044 round_trippers.go:469] Request Headers:
	I0328 01:33:00.898690    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:33:00.898690    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:33:00.905016    6044 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0328 01:33:00.905016    6044 round_trippers.go:577] Response Headers:
	I0328 01:33:00.905016    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:33:00.905016    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:33:00.905016    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:33:00.905016    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:33:00.905016    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:33:00 GMT
	I0328 01:33:00.905016    6044 round_trippers.go:580]     Audit-Id: 51621dec-19f7-4d1a-9bad-a4e49b91faef
	I0328 01:33:00.905016    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"2022","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5245 chars]
	I0328 01:33:01.406720    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-776ph
	I0328 01:33:01.406771    6044 round_trippers.go:469] Request Headers:
	I0328 01:33:01.406771    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:33:01.406771    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:33:01.411428    6044 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 01:33:01.411523    6044 round_trippers.go:577] Response Headers:
	I0328 01:33:01.411617    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:33:01.411617    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:33:01.411617    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:33:01 GMT
	I0328 01:33:01.411617    6044 round_trippers.go:580]     Audit-Id: 9fe92bca-5113-47aa-811f-96768e8454d0
	I0328 01:33:01.411673    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:33:01.411673    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:33:01.411892    6044 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-76f75df574-776ph","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"dc1416cc-736d-4eab-b95d-e963572b78e3","resourceVersion":"1866","creationTimestamp":"2024-03-28T01:07:44Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"840f6c47-adf3-4c08-8b73-04e55f98f236","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:07:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"840f6c47-adf3-4c08-8b73-04e55f98f236\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0328 01:33:01.412538    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:33:01.412625    6044 round_trippers.go:469] Request Headers:
	I0328 01:33:01.412625    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:33:01.412625    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:33:01.417300    6044 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 01:33:01.417300    6044 round_trippers.go:577] Response Headers:
	I0328 01:33:01.417300    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:33:01.417300    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:33:01.417300    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:33:01.417300    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:33:01 GMT
	I0328 01:33:01.417365    6044 round_trippers.go:580]     Audit-Id: 97ce8b72-3122-4bfc-8f34-fd1e2260a5fb
	I0328 01:33:01.417365    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:33:01.417874    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"2022","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5245 chars]
	I0328 01:33:01.418278    6044 pod_ready.go:102] pod "coredns-76f75df574-776ph" in "kube-system" namespace has status "Ready":"False"
	I0328 01:33:01.892746    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-776ph
	I0328 01:33:01.893056    6044 round_trippers.go:469] Request Headers:
	I0328 01:33:01.893056    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:33:01.893056    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:33:01.899167    6044 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0328 01:33:01.899239    6044 round_trippers.go:577] Response Headers:
	I0328 01:33:01.899239    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:33:01.899302    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:33:01.899302    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:33:01.899323    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:33:01 GMT
	I0328 01:33:01.899349    6044 round_trippers.go:580]     Audit-Id: e31782a6-5131-433f-bbab-bf66f5691ca8
	I0328 01:33:01.899349    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:33:01.900636    6044 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-76f75df574-776ph","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"dc1416cc-736d-4eab-b95d-e963572b78e3","resourceVersion":"1866","creationTimestamp":"2024-03-28T01:07:44Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"840f6c47-adf3-4c08-8b73-04e55f98f236","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:07:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"840f6c47-adf3-4c08-8b73-04e55f98f236\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0328 01:33:01.901405    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:33:01.901405    6044 round_trippers.go:469] Request Headers:
	I0328 01:33:01.901405    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:33:01.901405    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:33:01.904309    6044 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0328 01:33:01.904309    6044 round_trippers.go:577] Response Headers:
	I0328 01:33:01.904309    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:33:01.904309    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:33:01.904309    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:33:01 GMT
	I0328 01:33:01.904309    6044 round_trippers.go:580]     Audit-Id: b59c095a-a7cd-406b-831a-12ca1bb45105
	I0328 01:33:01.904309    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:33:01.904309    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:33:01.904309    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"2022","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5245 chars]
	I0328 01:33:02.396498    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-776ph
	I0328 01:33:02.396498    6044 round_trippers.go:469] Request Headers:
	I0328 01:33:02.396498    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:33:02.396498    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:33:02.405035    6044 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0328 01:33:02.405554    6044 round_trippers.go:577] Response Headers:
	I0328 01:33:02.405554    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:33:02.405554    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:33:02.405554    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:33:02.405554    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:33:02 GMT
	I0328 01:33:02.405554    6044 round_trippers.go:580]     Audit-Id: f93fe61b-90bb-4702-8c88-88562368583b
	I0328 01:33:02.405554    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:33:02.405816    6044 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-76f75df574-776ph","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"dc1416cc-736d-4eab-b95d-e963572b78e3","resourceVersion":"1866","creationTimestamp":"2024-03-28T01:07:44Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"840f6c47-adf3-4c08-8b73-04e55f98f236","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:07:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"840f6c47-adf3-4c08-8b73-04e55f98f236\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0328 01:33:02.406537    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:33:02.406634    6044 round_trippers.go:469] Request Headers:
	I0328 01:33:02.406634    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:33:02.406634    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:33:02.408926    6044 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0328 01:33:02.408926    6044 round_trippers.go:577] Response Headers:
	I0328 01:33:02.408926    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:33:02.408926    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:33:02 GMT
	I0328 01:33:02.408926    6044 round_trippers.go:580]     Audit-Id: 00c67b79-3f4b-4576-8abe-3ef5f468e504
	I0328 01:33:02.409972    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:33:02.409972    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:33:02.410001    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:33:02.410179    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"2022","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5245 chars]
	I0328 01:33:02.897825    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-776ph
	I0328 01:33:02.897825    6044 round_trippers.go:469] Request Headers:
	I0328 01:33:02.897825    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:33:02.897825    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:33:02.902940    6044 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 01:33:02.902940    6044 round_trippers.go:577] Response Headers:
	I0328 01:33:02.902940    6044 round_trippers.go:580]     Audit-Id: 26fe1be7-6bf7-47c9-86fe-e84520b8f6d6
	I0328 01:33:02.902940    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:33:02.902940    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:33:02.902940    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:33:02.902940    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:33:02.902940    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:33:02 GMT
	I0328 01:33:02.902940    6044 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-76f75df574-776ph","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"dc1416cc-736d-4eab-b95d-e963572b78e3","resourceVersion":"1866","creationTimestamp":"2024-03-28T01:07:44Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"840f6c47-adf3-4c08-8b73-04e55f98f236","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:07:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"840f6c47-adf3-4c08-8b73-04e55f98f236\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0328 01:33:02.903994    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:33:02.904077    6044 round_trippers.go:469] Request Headers:
	I0328 01:33:02.904077    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:33:02.904077    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:33:02.906553    6044 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0328 01:33:02.906553    6044 round_trippers.go:577] Response Headers:
	I0328 01:33:02.907458    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:33:02.907458    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:33:02 GMT
	I0328 01:33:02.907458    6044 round_trippers.go:580]     Audit-Id: e50af020-fadd-4a9f-a213-740fe249642d
	I0328 01:33:02.907458    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:33:02.907458    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:33:02.907458    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:33:02.907778    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"2022","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5245 chars]
	I0328 01:33:03.400938    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-776ph
	I0328 01:33:03.400938    6044 round_trippers.go:469] Request Headers:
	I0328 01:33:03.401076    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:33:03.401076    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:33:03.405050    6044 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0328 01:33:03.405484    6044 round_trippers.go:577] Response Headers:
	I0328 01:33:03.405484    6044 round_trippers.go:580]     Audit-Id: 8686030c-7e7a-4471-ab75-64385c8f9b00
	I0328 01:33:03.405557    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:33:03.405557    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:33:03.405557    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:33:03.405557    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:33:03.405557    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:33:03 GMT
	I0328 01:33:03.405925    6044 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-76f75df574-776ph","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"dc1416cc-736d-4eab-b95d-e963572b78e3","resourceVersion":"1866","creationTimestamp":"2024-03-28T01:07:44Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"840f6c47-adf3-4c08-8b73-04e55f98f236","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:07:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"840f6c47-adf3-4c08-8b73-04e55f98f236\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0328 01:33:03.406655    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:33:03.406655    6044 round_trippers.go:469] Request Headers:
	I0328 01:33:03.406655    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:33:03.406655    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:33:03.409080    6044 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0328 01:33:03.409080    6044 round_trippers.go:577] Response Headers:
	I0328 01:33:03.410021    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:33:03.410061    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:33:03 GMT
	I0328 01:33:03.410061    6044 round_trippers.go:580]     Audit-Id: 8aab82b8-1f18-412a-9775-bad27f6ea0c0
	I0328 01:33:03.410061    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:33:03.410102    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:33:03.410102    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:33:03.410166    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"2022","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5245 chars]
	I0328 01:33:03.900777    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-776ph
	I0328 01:33:03.900890    6044 round_trippers.go:469] Request Headers:
	I0328 01:33:03.900890    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:33:03.900890    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:33:03.907185    6044 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0328 01:33:03.907185    6044 round_trippers.go:577] Response Headers:
	I0328 01:33:03.907185    6044 round_trippers.go:580]     Audit-Id: 7b356ce8-a0f4-4104-a748-f7538e174307
	I0328 01:33:03.907185    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:33:03.907185    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:33:03.907185    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:33:03.907185    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:33:03.907185    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:33:03 GMT
	I0328 01:33:03.907724    6044 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-76f75df574-776ph","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"dc1416cc-736d-4eab-b95d-e963572b78e3","resourceVersion":"1866","creationTimestamp":"2024-03-28T01:07:44Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"840f6c47-adf3-4c08-8b73-04e55f98f236","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:07:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"840f6c47-adf3-4c08-8b73-04e55f98f236\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0328 01:33:03.908063    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:33:03.908593    6044 round_trippers.go:469] Request Headers:
	I0328 01:33:03.908593    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:33:03.908593    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:33:03.911910    6044 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0328 01:33:03.911910    6044 round_trippers.go:577] Response Headers:
	I0328 01:33:03.911910    6044 round_trippers.go:580]     Audit-Id: feb794f2-52ce-4747-beec-4c78cf33d607
	I0328 01:33:03.912164    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:33:03.912164    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:33:03.912199    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:33:03.912199    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:33:03.912199    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:33:03 GMT
	I0328 01:33:03.912594    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"2022","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5245 chars]
	I0328 01:33:03.913101    6044 pod_ready.go:102] pod "coredns-76f75df574-776ph" in "kube-system" namespace has status "Ready":"False"
	I0328 01:33:04.400234    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-776ph
	I0328 01:33:04.400500    6044 round_trippers.go:469] Request Headers:
	I0328 01:33:04.400500    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:33:04.400500    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:33:04.404794    6044 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 01:33:04.404794    6044 round_trippers.go:577] Response Headers:
	I0328 01:33:04.404794    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:33:04.404794    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:33:04 GMT
	I0328 01:33:04.404794    6044 round_trippers.go:580]     Audit-Id: 1c1bcc28-b189-4ac0-8143-c89be2a65a82
	I0328 01:33:04.404794    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:33:04.404794    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:33:04.405451    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:33:04.405719    6044 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-76f75df574-776ph","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"dc1416cc-736d-4eab-b95d-e963572b78e3","resourceVersion":"1866","creationTimestamp":"2024-03-28T01:07:44Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"840f6c47-adf3-4c08-8b73-04e55f98f236","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:07:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"840f6c47-adf3-4c08-8b73-04e55f98f236\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0328 01:33:04.406970    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:33:04.406970    6044 round_trippers.go:469] Request Headers:
	I0328 01:33:04.407069    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:33:04.407069    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:33:04.410321    6044 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0328 01:33:04.410321    6044 round_trippers.go:577] Response Headers:
	I0328 01:33:04.410321    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:33:04.410321    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:33:04 GMT
	I0328 01:33:04.410568    6044 round_trippers.go:580]     Audit-Id: 31d9d99f-a0c7-4770-b695-0d3f1bae2718
	I0328 01:33:04.410568    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:33:04.410568    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:33:04.410568    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:33:04.411057    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"2022","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5245 chars]
	I0328 01:33:04.905462    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-776ph
	I0328 01:33:04.905549    6044 round_trippers.go:469] Request Headers:
	I0328 01:33:04.905549    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:33:04.905549    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:33:04.910298    6044 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 01:33:04.910440    6044 round_trippers.go:577] Response Headers:
	I0328 01:33:04.910440    6044 round_trippers.go:580]     Audit-Id: 00c89c4f-a678-4086-a51c-030ed1d62a3f
	I0328 01:33:04.910440    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:33:04.910440    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:33:04.910525    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:33:04.910525    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:33:04.910525    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:33:04 GMT
	I0328 01:33:04.910525    6044 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-76f75df574-776ph","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"dc1416cc-736d-4eab-b95d-e963572b78e3","resourceVersion":"1866","creationTimestamp":"2024-03-28T01:07:44Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"840f6c47-adf3-4c08-8b73-04e55f98f236","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:07:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"840f6c47-adf3-4c08-8b73-04e55f98f236\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0328 01:33:04.911576    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:33:04.911576    6044 round_trippers.go:469] Request Headers:
	I0328 01:33:04.911576    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:33:04.911576    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:33:04.916761    6044 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 01:33:04.916761    6044 round_trippers.go:577] Response Headers:
	I0328 01:33:04.916761    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:33:04.916761    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:33:04.916761    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:33:04 GMT
	I0328 01:33:04.916761    6044 round_trippers.go:580]     Audit-Id: 7bf6e77b-66b3-41d7-ade2-7c62ba084289
	I0328 01:33:04.916761    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:33:04.916761    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:33:04.917521    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"2022","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5245 chars]
	I0328 01:33:05.394936    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-776ph
	I0328 01:33:05.395226    6044 round_trippers.go:469] Request Headers:
	I0328 01:33:05.395226    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:33:05.395226    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:33:05.402804    6044 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0328 01:33:05.402804    6044 round_trippers.go:577] Response Headers:
	I0328 01:33:05.402804    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:33:05.402804    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:33:05.402804    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:33:05 GMT
	I0328 01:33:05.402804    6044 round_trippers.go:580]     Audit-Id: 3628962c-5e78-4933-a6ea-28deddcada1b
	I0328 01:33:05.402804    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:33:05.402804    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:33:05.402804    6044 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-76f75df574-776ph","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"dc1416cc-736d-4eab-b95d-e963572b78e3","resourceVersion":"1866","creationTimestamp":"2024-03-28T01:07:44Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"840f6c47-adf3-4c08-8b73-04e55f98f236","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:07:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"840f6c47-adf3-4c08-8b73-04e55f98f236\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0328 01:33:05.404168    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:33:05.404198    6044 round_trippers.go:469] Request Headers:
	I0328 01:33:05.404198    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:33:05.404246    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:33:05.407482    6044 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0328 01:33:05.407482    6044 round_trippers.go:577] Response Headers:
	I0328 01:33:05.407482    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:33:05.407482    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:33:05.407482    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:33:05.407482    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:33:05.407482    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:33:05 GMT
	I0328 01:33:05.407482    6044 round_trippers.go:580]     Audit-Id: f7737af8-af4a-44f1-8d71-8216c121aa27
	I0328 01:33:05.408709    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"2022","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5245 chars]
	I0328 01:33:05.894378    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-776ph
	I0328 01:33:05.894378    6044 round_trippers.go:469] Request Headers:
	I0328 01:33:05.894378    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:33:05.894378    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:33:05.899311    6044 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 01:33:05.899311    6044 round_trippers.go:577] Response Headers:
	I0328 01:33:05.899311    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:33:05.899311    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:33:05.899311    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:33:05.899311    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:33:05.899311    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:33:05 GMT
	I0328 01:33:05.899311    6044 round_trippers.go:580]     Audit-Id: bbe443c9-464b-48cc-9830-c308933e119c
	I0328 01:33:05.899311    6044 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-76f75df574-776ph","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"dc1416cc-736d-4eab-b95d-e963572b78e3","resourceVersion":"1866","creationTimestamp":"2024-03-28T01:07:44Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"840f6c47-adf3-4c08-8b73-04e55f98f236","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:07:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"840f6c47-adf3-4c08-8b73-04e55f98f236\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0328 01:33:05.900357    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:33:05.900357    6044 round_trippers.go:469] Request Headers:
	I0328 01:33:05.900357    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:33:05.900357    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:33:05.906603    6044 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0328 01:33:05.906603    6044 round_trippers.go:577] Response Headers:
	I0328 01:33:05.906603    6044 round_trippers.go:580]     Audit-Id: 33f40ecd-6544-404f-8ef6-bd867ff9aa1b
	I0328 01:33:05.906603    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:33:05.906603    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:33:05.906603    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:33:05.906603    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:33:05.906603    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:33:05 GMT
	I0328 01:33:05.908462    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"2022","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5245 chars]
	I0328 01:33:06.393392    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-776ph
	I0328 01:33:06.393578    6044 round_trippers.go:469] Request Headers:
	I0328 01:33:06.393578    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:33:06.393578    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:33:06.398134    6044 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 01:33:06.398426    6044 round_trippers.go:577] Response Headers:
	I0328 01:33:06.398426    6044 round_trippers.go:580]     Audit-Id: d6bd93ac-a1f3-4184-b3d3-139445514e8b
	I0328 01:33:06.398426    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:33:06.398426    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:33:06.398426    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:33:06.398426    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:33:06.398426    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:33:06 GMT
	I0328 01:33:06.399075    6044 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-76f75df574-776ph","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"dc1416cc-736d-4eab-b95d-e963572b78e3","resourceVersion":"1866","creationTimestamp":"2024-03-28T01:07:44Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"840f6c47-adf3-4c08-8b73-04e55f98f236","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:07:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"840f6c47-adf3-4c08-8b73-04e55f98f236\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0328 01:33:06.399860    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:33:06.399931    6044 round_trippers.go:469] Request Headers:
	I0328 01:33:06.399931    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:33:06.399931    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:33:06.405548    6044 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 01:33:06.405548    6044 round_trippers.go:577] Response Headers:
	I0328 01:33:06.405548    6044 round_trippers.go:580]     Audit-Id: 3de03604-9488-4bba-b335-568d07700fc6
	I0328 01:33:06.405548    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:33:06.405548    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:33:06.405548    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:33:06.405548    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:33:06.405548    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:33:06 GMT
	I0328 01:33:06.405548    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"2022","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5245 chars]
	I0328 01:33:06.406308    6044 pod_ready.go:102] pod "coredns-76f75df574-776ph" in "kube-system" namespace has status "Ready":"False"
	I0328 01:33:06.897161    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-776ph
	I0328 01:33:06.897240    6044 round_trippers.go:469] Request Headers:
	I0328 01:33:06.897240    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:33:06.897240    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:33:06.901664    6044 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 01:33:06.902004    6044 round_trippers.go:577] Response Headers:
	I0328 01:33:06.902004    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:33:06.902004    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:33:06.902004    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:33:06.902004    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:33:06.902075    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:33:06 GMT
	I0328 01:33:06.902075    6044 round_trippers.go:580]     Audit-Id: 09e2bfe4-7193-441f-a7d4-142f6ef5f67d
	I0328 01:33:06.902374    6044 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-76f75df574-776ph","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"dc1416cc-736d-4eab-b95d-e963572b78e3","resourceVersion":"1866","creationTimestamp":"2024-03-28T01:07:44Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"840f6c47-adf3-4c08-8b73-04e55f98f236","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:07:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"840f6c47-adf3-4c08-8b73-04e55f98f236\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0328 01:33:06.903175    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:33:06.903230    6044 round_trippers.go:469] Request Headers:
	I0328 01:33:06.903230    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:33:06.903230    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:33:06.906790    6044 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0328 01:33:06.906993    6044 round_trippers.go:577] Response Headers:
	I0328 01:33:06.906993    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:33:06.906993    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:33:06.906993    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:33:06.906993    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:33:06.906993    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:33:06 GMT
	I0328 01:33:06.907116    6044 round_trippers.go:580]     Audit-Id: 72d28c5d-1b6b-4059-bbdf-8efe65038be0
	I0328 01:33:06.907230    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"2022","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5245 chars]
	I0328 01:33:07.394758    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-776ph
	I0328 01:33:07.394758    6044 round_trippers.go:469] Request Headers:
	I0328 01:33:07.394758    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:33:07.394841    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:33:07.402844    6044 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0328 01:33:07.402844    6044 round_trippers.go:577] Response Headers:
	I0328 01:33:07.402844    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:33:07.402844    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:33:07.402844    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:33:07.402844    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:33:07.402844    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:33:07 GMT
	I0328 01:33:07.402844    6044 round_trippers.go:580]     Audit-Id: dab4dcd8-b31a-46f3-bb65-03661761549c
	I0328 01:33:07.402844    6044 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-76f75df574-776ph","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"dc1416cc-736d-4eab-b95d-e963572b78e3","resourceVersion":"1866","creationTimestamp":"2024-03-28T01:07:44Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"840f6c47-adf3-4c08-8b73-04e55f98f236","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:07:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"840f6c47-adf3-4c08-8b73-04e55f98f236\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0328 01:33:07.403662    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:33:07.403662    6044 round_trippers.go:469] Request Headers:
	I0328 01:33:07.403662    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:33:07.403662    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:33:07.407525    6044 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0328 01:33:07.407525    6044 round_trippers.go:577] Response Headers:
	I0328 01:33:07.407525    6044 round_trippers.go:580]     Audit-Id: 5f51400a-0468-4e96-9500-bccbe980ec0d
	I0328 01:33:07.407525    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:33:07.407525    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:33:07.407525    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:33:07.407525    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:33:07.407525    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:33:07 GMT
	I0328 01:33:07.407525    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"2022","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5245 chars]
	I0328 01:33:07.907184    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-776ph
	I0328 01:33:07.907184    6044 round_trippers.go:469] Request Headers:
	I0328 01:33:07.907184    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:33:07.907184    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:33:07.911624    6044 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 01:33:07.911624    6044 round_trippers.go:577] Response Headers:
	I0328 01:33:07.911624    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:33:07.911624    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:33:07.911624    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:33:07 GMT
	I0328 01:33:07.912474    6044 round_trippers.go:580]     Audit-Id: b9fce0ca-35b0-4919-afd8-ffa1781f256f
	I0328 01:33:07.912474    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:33:07.912474    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:33:07.912689    6044 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-76f75df574-776ph","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"dc1416cc-736d-4eab-b95d-e963572b78e3","resourceVersion":"1866","creationTimestamp":"2024-03-28T01:07:44Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"840f6c47-adf3-4c08-8b73-04e55f98f236","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:07:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"840f6c47-adf3-4c08-8b73-04e55f98f236\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0328 01:33:07.913490    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:33:07.913490    6044 round_trippers.go:469] Request Headers:
	I0328 01:33:07.913490    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:33:07.913490    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:33:07.916534    6044 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0328 01:33:07.916534    6044 round_trippers.go:577] Response Headers:
	I0328 01:33:07.916534    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:33:07.916534    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:33:07.916534    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:33:07.916593    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:33:07 GMT
	I0328 01:33:07.916593    6044 round_trippers.go:580]     Audit-Id: 526986ac-5300-411e-9287-5b02366af36c
	I0328 01:33:07.916593    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:33:07.916915    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"2022","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5245 chars]
	I0328 01:33:08.406447    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-776ph
	I0328 01:33:08.406447    6044 round_trippers.go:469] Request Headers:
	I0328 01:33:08.406447    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:33:08.406447    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:33:08.410044    6044 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0328 01:33:08.410044    6044 round_trippers.go:577] Response Headers:
	I0328 01:33:08.410044    6044 round_trippers.go:580]     Audit-Id: b8b4b0c5-1e81-4255-a450-51557f34af7b
	I0328 01:33:08.410044    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:33:08.410044    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:33:08.410044    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:33:08.410044    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:33:08.411015    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:33:08 GMT
	I0328 01:33:08.411015    6044 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-76f75df574-776ph","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"dc1416cc-736d-4eab-b95d-e963572b78e3","resourceVersion":"1866","creationTimestamp":"2024-03-28T01:07:44Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"840f6c47-adf3-4c08-8b73-04e55f98f236","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:07:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"840f6c47-adf3-4c08-8b73-04e55f98f236\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0328 01:33:08.412020    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:33:08.412020    6044 round_trippers.go:469] Request Headers:
	I0328 01:33:08.412020    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:33:08.412101    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:33:08.414842    6044 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0328 01:33:08.415686    6044 round_trippers.go:577] Response Headers:
	I0328 01:33:08.415686    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:33:08 GMT
	I0328 01:33:08.415686    6044 round_trippers.go:580]     Audit-Id: 825aa512-2d91-44d7-819e-f5f725b4b3fa
	I0328 01:33:08.415686    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:33:08.415686    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:33:08.415686    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:33:08.415686    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:33:08.415686    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"2022","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5245 chars]
	I0328 01:33:08.416398    6044 pod_ready.go:102] pod "coredns-76f75df574-776ph" in "kube-system" namespace has status "Ready":"False"
	I0328 01:33:08.905388    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-776ph
	I0328 01:33:08.905388    6044 round_trippers.go:469] Request Headers:
	I0328 01:33:08.905388    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:33:08.905388    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:33:08.910638    6044 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 01:33:08.910638    6044 round_trippers.go:577] Response Headers:
	I0328 01:33:08.910638    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:33:08.910716    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:33:08.910716    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:33:08.910716    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:33:08 GMT
	I0328 01:33:08.910716    6044 round_trippers.go:580]     Audit-Id: 0448ff0f-5b7f-453a-8d43-2d0a99f9c9a5
	I0328 01:33:08.910716    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:33:08.910988    6044 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-76f75df574-776ph","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"dc1416cc-736d-4eab-b95d-e963572b78e3","resourceVersion":"1866","creationTimestamp":"2024-03-28T01:07:44Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"840f6c47-adf3-4c08-8b73-04e55f98f236","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:07:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"840f6c47-adf3-4c08-8b73-04e55f98f236\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0328 01:33:08.911579    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:33:08.911579    6044 round_trippers.go:469] Request Headers:
	I0328 01:33:08.911579    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:33:08.911579    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:33:08.915167    6044 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0328 01:33:08.915520    6044 round_trippers.go:577] Response Headers:
	I0328 01:33:08.915520    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:33:08.915520    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:33:08.915520    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:33:08.915520    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:33:08 GMT
	I0328 01:33:08.915520    6044 round_trippers.go:580]     Audit-Id: 065889ff-c8f1-4fea-bed6-0b197eaf1adf
	I0328 01:33:08.915584    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:33:08.916162    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"2022","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5245 chars]
	I0328 01:33:09.404122    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-776ph
	I0328 01:33:09.404122    6044 round_trippers.go:469] Request Headers:
	I0328 01:33:09.404122    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:33:09.404244    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:33:09.411432    6044 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0328 01:33:09.411432    6044 round_trippers.go:577] Response Headers:
	I0328 01:33:09.411432    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:33:09.411432    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:33:09 GMT
	I0328 01:33:09.411432    6044 round_trippers.go:580]     Audit-Id: beba5670-36b3-4f4c-88a9-cb37450f7fde
	I0328 01:33:09.411432    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:33:09.411432    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:33:09.411432    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:33:09.411432    6044 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-76f75df574-776ph","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"dc1416cc-736d-4eab-b95d-e963572b78e3","resourceVersion":"1866","creationTimestamp":"2024-03-28T01:07:44Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"840f6c47-adf3-4c08-8b73-04e55f98f236","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:07:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"840f6c47-adf3-4c08-8b73-04e55f98f236\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0328 01:33:09.412197    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:33:09.412197    6044 round_trippers.go:469] Request Headers:
	I0328 01:33:09.412197    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:33:09.412197    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:33:09.416213    6044 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 01:33:09.416213    6044 round_trippers.go:577] Response Headers:
	I0328 01:33:09.416213    6044 round_trippers.go:580]     Audit-Id: a4293e46-855d-4402-b0ce-a079f60dadac
	I0328 01:33:09.416213    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:33:09.416213    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:33:09.416329    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:33:09.416405    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:33:09.416405    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:33:09 GMT
	I0328 01:33:09.416529    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"2022","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5245 chars]
	I0328 01:33:09.901462    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-776ph
	I0328 01:33:09.901462    6044 round_trippers.go:469] Request Headers:
	I0328 01:33:09.901462    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:33:09.901462    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:33:09.906109    6044 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 01:33:09.906109    6044 round_trippers.go:577] Response Headers:
	I0328 01:33:09.906109    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:33:09 GMT
	I0328 01:33:09.906109    6044 round_trippers.go:580]     Audit-Id: 2b7b0d75-763f-4145-8144-f51c0108e6d3
	I0328 01:33:09.906109    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:33:09.906109    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:33:09.906289    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:33:09.906289    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:33:09.906667    6044 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-76f75df574-776ph","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"dc1416cc-736d-4eab-b95d-e963572b78e3","resourceVersion":"1866","creationTimestamp":"2024-03-28T01:07:44Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"840f6c47-adf3-4c08-8b73-04e55f98f236","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:07:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"840f6c47-adf3-4c08-8b73-04e55f98f236\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0328 01:33:09.907366    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:33:09.907366    6044 round_trippers.go:469] Request Headers:
	I0328 01:33:09.907366    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:33:09.907366    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:33:09.910515    6044 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0328 01:33:09.910762    6044 round_trippers.go:577] Response Headers:
	I0328 01:33:09.910762    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:33:09.910762    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:33:09.910762    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:33:09.910762    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:33:09 GMT
	I0328 01:33:09.910762    6044 round_trippers.go:580]     Audit-Id: dca9629d-51c4-4aac-b69b-b21d22c4b13b
	I0328 01:33:09.910762    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:33:09.910968    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"2022","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5245 chars]
	I0328 01:33:10.402273    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-776ph
	I0328 01:33:10.402273    6044 round_trippers.go:469] Request Headers:
	I0328 01:33:10.402273    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:33:10.402273    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:33:10.407166    6044 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 01:33:10.407233    6044 round_trippers.go:577] Response Headers:
	I0328 01:33:10.407233    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:33:10 GMT
	I0328 01:33:10.407233    6044 round_trippers.go:580]     Audit-Id: 6e8a9639-cf75-4ac9-a2dd-1627b49fcb23
	I0328 01:33:10.407233    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:33:10.407233    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:33:10.407233    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:33:10.407326    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:33:10.407865    6044 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-76f75df574-776ph","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"dc1416cc-736d-4eab-b95d-e963572b78e3","resourceVersion":"1866","creationTimestamp":"2024-03-28T01:07:44Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"840f6c47-adf3-4c08-8b73-04e55f98f236","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:07:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"840f6c47-adf3-4c08-8b73-04e55f98f236\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0328 01:33:10.408780    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:33:10.408914    6044 round_trippers.go:469] Request Headers:
	I0328 01:33:10.408914    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:33:10.408914    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:33:10.413106    6044 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 01:33:10.413872    6044 round_trippers.go:577] Response Headers:
	I0328 01:33:10.413941    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:33:10.413941    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:33:10.413941    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:33:10.413941    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:33:10.414006    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:33:10 GMT
	I0328 01:33:10.414006    6044 round_trippers.go:580]     Audit-Id: 140b7fe9-f27a-44f5-94dd-7b4bd588b7f1
	I0328 01:33:10.414193    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"2022","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5245 chars]
	I0328 01:33:10.900936    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-776ph
	I0328 01:33:10.901042    6044 round_trippers.go:469] Request Headers:
	I0328 01:33:10.901042    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:33:10.901042    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:33:10.904400    6044 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0328 01:33:10.905417    6044 round_trippers.go:577] Response Headers:
	I0328 01:33:10.905417    6044 round_trippers.go:580]     Audit-Id: 1d4ad733-fabb-42ac-aff2-4991466c2a27
	I0328 01:33:10.905457    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:33:10.905457    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:33:10.905457    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:33:10.905457    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:33:10.905457    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:33:10 GMT
	I0328 01:33:10.905595    6044 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-76f75df574-776ph","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"dc1416cc-736d-4eab-b95d-e963572b78e3","resourceVersion":"1866","creationTimestamp":"2024-03-28T01:07:44Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"840f6c47-adf3-4c08-8b73-04e55f98f236","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:07:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"840f6c47-adf3-4c08-8b73-04e55f98f236\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0328 01:33:10.906177    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:33:10.906177    6044 round_trippers.go:469] Request Headers:
	I0328 01:33:10.906177    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:33:10.906332    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:33:10.909462    6044 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0328 01:33:10.909637    6044 round_trippers.go:577] Response Headers:
	I0328 01:33:10.909637    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:33:10.909687    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:33:10.909687    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:33:10 GMT
	I0328 01:33:10.909687    6044 round_trippers.go:580]     Audit-Id: eed2ce8f-e4b0-4620-a161-def523ecc219
	I0328 01:33:10.909687    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:33:10.909734    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:33:10.909786    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"2022","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5245 chars]
	I0328 01:33:10.910316    6044 pod_ready.go:102] pod "coredns-76f75df574-776ph" in "kube-system" namespace has status "Ready":"False"
	I0328 01:33:11.400884    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-776ph
	I0328 01:33:11.401017    6044 round_trippers.go:469] Request Headers:
	I0328 01:33:11.401017    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:33:11.401017    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:33:11.404869    6044 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0328 01:33:11.404869    6044 round_trippers.go:577] Response Headers:
	I0328 01:33:11.404869    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:33:11.404869    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:33:11 GMT
	I0328 01:33:11.404869    6044 round_trippers.go:580]     Audit-Id: 0aeededd-4810-44a1-a3c2-c76b431c4c25
	I0328 01:33:11.404869    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:33:11.404869    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:33:11.404869    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:33:11.405880    6044 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-76f75df574-776ph","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"dc1416cc-736d-4eab-b95d-e963572b78e3","resourceVersion":"1866","creationTimestamp":"2024-03-28T01:07:44Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"840f6c47-adf3-4c08-8b73-04e55f98f236","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:07:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"840f6c47-adf3-4c08-8b73-04e55f98f236\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0328 01:33:11.406616    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:33:11.406616    6044 round_trippers.go:469] Request Headers:
	I0328 01:33:11.406703    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:33:11.406703    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:33:11.413693    6044 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0328 01:33:11.413693    6044 round_trippers.go:577] Response Headers:
	I0328 01:33:11.413693    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:33:11.413693    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:33:11.413693    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:33:11 GMT
	I0328 01:33:11.413693    6044 round_trippers.go:580]     Audit-Id: 58ef7b12-f119-4722-b747-8a16363daa76
	I0328 01:33:11.413693    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:33:11.413693    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:33:11.414444    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"2022","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5245 chars]
	I0328 01:33:11.902841    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-776ph
	I0328 01:33:11.902841    6044 round_trippers.go:469] Request Headers:
	I0328 01:33:11.902841    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:33:11.902841    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:33:11.906923    6044 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0328 01:33:11.906923    6044 round_trippers.go:577] Response Headers:
	I0328 01:33:11.906923    6044 round_trippers.go:580]     Audit-Id: b5a2d79d-b633-495b-90f0-845476c889e0
	I0328 01:33:11.906923    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:33:11.906923    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:33:11.906923    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:33:11.906923    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:33:11.906923    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:33:11 GMT
	I0328 01:33:11.907695    6044 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-76f75df574-776ph","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"dc1416cc-736d-4eab-b95d-e963572b78e3","resourceVersion":"1866","creationTimestamp":"2024-03-28T01:07:44Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"840f6c47-adf3-4c08-8b73-04e55f98f236","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:07:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"840f6c47-adf3-4c08-8b73-04e55f98f236\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0328 01:33:11.908306    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:33:11.908306    6044 round_trippers.go:469] Request Headers:
	I0328 01:33:11.908306    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:33:11.908306    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:33:11.911341    6044 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0328 01:33:11.911341    6044 round_trippers.go:577] Response Headers:
	I0328 01:33:11.911341    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:33:11.911524    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:33:11 GMT
	I0328 01:33:11.911524    6044 round_trippers.go:580]     Audit-Id: babc38e8-2c37-45c8-9b07-4358e99bddfc
	I0328 01:33:11.911524    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:33:11.911524    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:33:11.911524    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:33:11.911524    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"2022","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5245 chars]
	I0328 01:33:12.402309    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-776ph
	I0328 01:33:12.402309    6044 round_trippers.go:469] Request Headers:
	I0328 01:33:12.402309    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:33:12.402585    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:33:12.408245    6044 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 01:33:12.408245    6044 round_trippers.go:577] Response Headers:
	I0328 01:33:12.408245    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:33:12 GMT
	I0328 01:33:12.408245    6044 round_trippers.go:580]     Audit-Id: b63bf871-414a-4391-a6c9-281cb4fbecec
	I0328 01:33:12.408245    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:33:12.408245    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:33:12.408245    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:33:12.408245    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:33:12.408462    6044 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-76f75df574-776ph","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"dc1416cc-736d-4eab-b95d-e963572b78e3","resourceVersion":"1866","creationTimestamp":"2024-03-28T01:07:44Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"840f6c47-adf3-4c08-8b73-04e55f98f236","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:07:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"840f6c47-adf3-4c08-8b73-04e55f98f236\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0328 01:33:12.409121    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:33:12.409121    6044 round_trippers.go:469] Request Headers:
	I0328 01:33:12.409121    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:33:12.409281    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:33:12.412590    6044 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0328 01:33:12.413273    6044 round_trippers.go:577] Response Headers:
	I0328 01:33:12.413273    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:33:12 GMT
	I0328 01:33:12.413273    6044 round_trippers.go:580]     Audit-Id: ded0763b-5919-4e5e-9d9a-0bdb07a4d799
	I0328 01:33:12.413273    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:33:12.413348    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:33:12.413348    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:33:12.413348    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:33:12.413723    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"2022","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5245 chars]
	I0328 01:33:12.903090    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-776ph
	I0328 01:33:12.903090    6044 round_trippers.go:469] Request Headers:
	I0328 01:33:12.903090    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:33:12.903090    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:33:12.907498    6044 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 01:33:12.907567    6044 round_trippers.go:577] Response Headers:
	I0328 01:33:12.907567    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:33:12.907567    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:33:12.907567    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:33:12.907567    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:33:12.907673    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:33:12 GMT
	I0328 01:33:12.907673    6044 round_trippers.go:580]     Audit-Id: 2ff537f2-9526-4276-b194-329111e0f0d0
	I0328 01:33:12.907868    6044 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-76f75df574-776ph","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"dc1416cc-736d-4eab-b95d-e963572b78e3","resourceVersion":"1866","creationTimestamp":"2024-03-28T01:07:44Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"840f6c47-adf3-4c08-8b73-04e55f98f236","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:07:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"840f6c47-adf3-4c08-8b73-04e55f98f236\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0328 01:33:12.908747    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:33:12.908747    6044 round_trippers.go:469] Request Headers:
	I0328 01:33:12.908801    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:33:12.908801    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:33:12.911681    6044 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0328 01:33:12.911775    6044 round_trippers.go:577] Response Headers:
	I0328 01:33:12.911775    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:33:12.911873    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:33:12.911873    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:33:12 GMT
	I0328 01:33:12.911873    6044 round_trippers.go:580]     Audit-Id: f5e481d2-6523-4a24-8874-059452d457b6
	I0328 01:33:12.911873    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:33:12.911873    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:33:12.911873    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"2022","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5245 chars]
	I0328 01:33:12.912427    6044 pod_ready.go:102] pod "coredns-76f75df574-776ph" in "kube-system" namespace has status "Ready":"False"
	I0328 01:33:13.406888    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-776ph
	I0328 01:33:13.407011    6044 round_trippers.go:469] Request Headers:
	I0328 01:33:13.407011    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:33:13.407011    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:33:13.411816    6044 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 01:33:13.411816    6044 round_trippers.go:577] Response Headers:
	I0328 01:33:13.411816    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:33:13 GMT
	I0328 01:33:13.411816    6044 round_trippers.go:580]     Audit-Id: cbcbd04d-51af-4b34-8fec-c404b9f30fd4
	I0328 01:33:13.411816    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:33:13.411816    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:33:13.411816    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:33:13.412429    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:33:13.412646    6044 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-76f75df574-776ph","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"dc1416cc-736d-4eab-b95d-e963572b78e3","resourceVersion":"1866","creationTimestamp":"2024-03-28T01:07:44Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"840f6c47-adf3-4c08-8b73-04e55f98f236","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:07:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"840f6c47-adf3-4c08-8b73-04e55f98f236\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0328 01:33:13.413415    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:33:13.413415    6044 round_trippers.go:469] Request Headers:
	I0328 01:33:13.413415    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:33:13.413415    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:33:13.416828    6044 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0328 01:33:13.417159    6044 round_trippers.go:577] Response Headers:
	I0328 01:33:13.417159    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:33:13.417159    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:33:13 GMT
	I0328 01:33:13.417159    6044 round_trippers.go:580]     Audit-Id: 818f870f-6a23-4ba4-a6b7-ebe91c798e4d
	I0328 01:33:13.417159    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:33:13.417159    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:33:13.417159    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:33:13.417570    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"2022","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5245 chars]
	I0328 01:33:13.893748    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-776ph
	I0328 01:33:13.893927    6044 round_trippers.go:469] Request Headers:
	I0328 01:33:13.894021    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:33:13.894021    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:33:13.898411    6044 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 01:33:13.898609    6044 round_trippers.go:577] Response Headers:
	I0328 01:33:13.898681    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:33:13 GMT
	I0328 01:33:13.898681    6044 round_trippers.go:580]     Audit-Id: 851bb9f8-0217-4f89-baed-2375d6be7f1e
	I0328 01:33:13.898681    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:33:13.898681    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:33:13.898681    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:33:13.898681    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:33:13.898849    6044 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-76f75df574-776ph","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"dc1416cc-736d-4eab-b95d-e963572b78e3","resourceVersion":"1866","creationTimestamp":"2024-03-28T01:07:44Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"840f6c47-adf3-4c08-8b73-04e55f98f236","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:07:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"840f6c47-adf3-4c08-8b73-04e55f98f236\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0328 01:33:13.899417    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:33:13.899417    6044 round_trippers.go:469] Request Headers:
	I0328 01:33:13.899417    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:33:13.899417    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:33:13.903407    6044 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0328 01:33:13.903834    6044 round_trippers.go:577] Response Headers:
	I0328 01:33:13.903834    6044 round_trippers.go:580]     Audit-Id: 84e4f908-fb1d-49dd-8aa1-2b2d0694169c
	I0328 01:33:13.903834    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:33:13.903929    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:33:13.903929    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:33:13.903929    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:33:13.903929    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:33:13 GMT
	I0328 01:33:13.903929    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"2022","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5245 chars]
	I0328 01:33:14.400857    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-776ph
	I0328 01:33:14.400857    6044 round_trippers.go:469] Request Headers:
	I0328 01:33:14.400857    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:33:14.400857    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:33:14.406335    6044 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 01:33:14.406335    6044 round_trippers.go:577] Response Headers:
	I0328 01:33:14.406335    6044 round_trippers.go:580]     Audit-Id: f91041b4-b7df-46c7-b2ce-21f57e4f686a
	I0328 01:33:14.406417    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:33:14.406437    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:33:14.406437    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:33:14.406437    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:33:14.406437    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:33:14 GMT
	I0328 01:33:14.406744    6044 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-76f75df574-776ph","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"dc1416cc-736d-4eab-b95d-e963572b78e3","resourceVersion":"1866","creationTimestamp":"2024-03-28T01:07:44Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"840f6c47-adf3-4c08-8b73-04e55f98f236","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:07:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"840f6c47-adf3-4c08-8b73-04e55f98f236\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0328 01:33:14.407522    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:33:14.407522    6044 round_trippers.go:469] Request Headers:
	I0328 01:33:14.407522    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:33:14.407522    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:33:14.413393    6044 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 01:33:14.413393    6044 round_trippers.go:577] Response Headers:
	I0328 01:33:14.413393    6044 round_trippers.go:580]     Audit-Id: 01fc3e27-b96a-4d67-aebf-f84e888032e9
	I0328 01:33:14.413393    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:33:14.413393    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:33:14.413393    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:33:14.413393    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:33:14.413393    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:33:14 GMT
	I0328 01:33:14.413939    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"2022","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5245 chars]
	I0328 01:33:14.902432    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-776ph
	I0328 01:33:14.902432    6044 round_trippers.go:469] Request Headers:
	I0328 01:33:14.902432    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:33:14.902432    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:33:14.906322    6044 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0328 01:33:14.906322    6044 round_trippers.go:577] Response Headers:
	I0328 01:33:14.906322    6044 round_trippers.go:580]     Audit-Id: c22c3fea-044d-4505-b5fa-2e989436c0ca
	I0328 01:33:14.906322    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:33:14.906322    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:33:14.906322    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:33:14.906322    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:33:14.906322    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:33:14 GMT
	I0328 01:33:14.906322    6044 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-76f75df574-776ph","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"dc1416cc-736d-4eab-b95d-e963572b78e3","resourceVersion":"1866","creationTimestamp":"2024-03-28T01:07:44Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"840f6c47-adf3-4c08-8b73-04e55f98f236","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:07:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"840f6c47-adf3-4c08-8b73-04e55f98f236\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0328 01:33:14.907299    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:33:14.907358    6044 round_trippers.go:469] Request Headers:
	I0328 01:33:14.907358    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:33:14.907358    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:33:14.910387    6044 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0328 01:33:14.910387    6044 round_trippers.go:577] Response Headers:
	I0328 01:33:14.910387    6044 round_trippers.go:580]     Audit-Id: 5e175c59-36d0-4a78-8a41-8965adf2fd65
	I0328 01:33:14.910387    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:33:14.910463    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:33:14.910463    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:33:14.910463    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:33:14.910463    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:33:14 GMT
	I0328 01:33:14.910641    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"2022","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5245 chars]
	I0328 01:33:15.401155    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-776ph
	I0328 01:33:15.401233    6044 round_trippers.go:469] Request Headers:
	I0328 01:33:15.401233    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:33:15.401233    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:33:15.405622    6044 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0328 01:33:15.405645    6044 round_trippers.go:577] Response Headers:
	I0328 01:33:15.405645    6044 round_trippers.go:580]     Audit-Id: 81ebdc8a-4c00-4f06-9298-3ec246091ca3
	I0328 01:33:15.405645    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:33:15.405713    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:33:15.405713    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:33:15.405713    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:33:15.405713    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:33:15 GMT
	I0328 01:33:15.408206    6044 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-76f75df574-776ph","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"dc1416cc-736d-4eab-b95d-e963572b78e3","resourceVersion":"1866","creationTimestamp":"2024-03-28T01:07:44Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"840f6c47-adf3-4c08-8b73-04e55f98f236","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:07:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"840f6c47-adf3-4c08-8b73-04e55f98f236\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0328 01:33:15.408885    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:33:15.408885    6044 round_trippers.go:469] Request Headers:
	I0328 01:33:15.408885    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:33:15.408885    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:33:15.413713    6044 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 01:33:15.413713    6044 round_trippers.go:577] Response Headers:
	I0328 01:33:15.413713    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:33:15.413713    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:33:15 GMT
	I0328 01:33:15.413713    6044 round_trippers.go:580]     Audit-Id: 2f6e2aba-66bf-4043-9f07-b5ec38e3a574
	I0328 01:33:15.413713    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:33:15.413713    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:33:15.413713    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:33:15.413713    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"2022","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5245 chars]
	I0328 01:33:15.413713    6044 pod_ready.go:102] pod "coredns-76f75df574-776ph" in "kube-system" namespace has status "Ready":"False"
	I0328 01:33:15.897975    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-776ph
	I0328 01:33:15.897975    6044 round_trippers.go:469] Request Headers:
	I0328 01:33:15.897975    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:33:15.897975    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:33:15.901546    6044 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0328 01:33:15.901546    6044 round_trippers.go:577] Response Headers:
	I0328 01:33:15.901546    6044 round_trippers.go:580]     Audit-Id: bfbdb2fc-8fff-42bf-866c-5b1447aeef3d
	I0328 01:33:15.901546    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:33:15.901546    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:33:15.901546    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:33:15.901546    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:33:15.901546    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:33:15 GMT
	I0328 01:33:15.903269    6044 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-76f75df574-776ph","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"dc1416cc-736d-4eab-b95d-e963572b78e3","resourceVersion":"1866","creationTimestamp":"2024-03-28T01:07:44Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"840f6c47-adf3-4c08-8b73-04e55f98f236","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:07:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"840f6c47-adf3-4c08-8b73-04e55f98f236\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0328 01:33:15.904312    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:33:15.904391    6044 round_trippers.go:469] Request Headers:
	I0328 01:33:15.904391    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:33:15.904391    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:33:15.907292    6044 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0328 01:33:15.907292    6044 round_trippers.go:577] Response Headers:
	I0328 01:33:15.907292    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:33:15 GMT
	I0328 01:33:15.907292    6044 round_trippers.go:580]     Audit-Id: fb39b8c1-94a6-490c-8a6f-dfe0f15fdbb2
	I0328 01:33:15.907292    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:33:15.907292    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:33:15.907826    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:33:15.907826    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:33:15.907980    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"2022","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5245 chars]
	I0328 01:33:16.395943    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-776ph
	I0328 01:33:16.396218    6044 round_trippers.go:469] Request Headers:
	I0328 01:33:16.396218    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:33:16.396218    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:33:16.402280    6044 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 01:33:16.402280    6044 round_trippers.go:577] Response Headers:
	I0328 01:33:16.402280    6044 round_trippers.go:580]     Audit-Id: e14f1b15-a98d-483f-b0f7-bf16f2ef0c7b
	I0328 01:33:16.402375    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:33:16.402375    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:33:16.402375    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:33:16.402375    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:33:16.402506    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:33:16 GMT
	I0328 01:33:16.402795    6044 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-76f75df574-776ph","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"dc1416cc-736d-4eab-b95d-e963572b78e3","resourceVersion":"1866","creationTimestamp":"2024-03-28T01:07:44Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"840f6c47-adf3-4c08-8b73-04e55f98f236","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:07:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"840f6c47-adf3-4c08-8b73-04e55f98f236\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0328 01:33:16.403763    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:33:16.403763    6044 round_trippers.go:469] Request Headers:
	I0328 01:33:16.403763    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:33:16.403763    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:33:16.406904    6044 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0328 01:33:16.407997    6044 round_trippers.go:577] Response Headers:
	I0328 01:33:16.407997    6044 round_trippers.go:580]     Audit-Id: 59bb2cf9-34ed-441b-ac48-6874b79ee56c
	I0328 01:33:16.407997    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:33:16.407997    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:33:16.407997    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:33:16.408053    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:33:16.408053    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:33:16 GMT
	I0328 01:33:16.408053    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"2022","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5245 chars]
	I0328 01:33:16.894960    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-776ph
	I0328 01:33:16.894960    6044 round_trippers.go:469] Request Headers:
	I0328 01:33:16.895224    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:33:16.895224    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:33:16.900541    6044 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 01:33:16.900541    6044 round_trippers.go:577] Response Headers:
	I0328 01:33:16.900630    6044 round_trippers.go:580]     Audit-Id: c66df5fa-e5fa-4a90-815c-5bf3ca6a9193
	I0328 01:33:16.900630    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:33:16.900630    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:33:16.900630    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:33:16.900630    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:33:16.900630    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:33:16 GMT
	I0328 01:33:16.901187    6044 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-76f75df574-776ph","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"dc1416cc-736d-4eab-b95d-e963572b78e3","resourceVersion":"1866","creationTimestamp":"2024-03-28T01:07:44Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"840f6c47-adf3-4c08-8b73-04e55f98f236","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:07:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"840f6c47-adf3-4c08-8b73-04e55f98f236\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0328 01:33:16.901504    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:33:16.901504    6044 round_trippers.go:469] Request Headers:
	I0328 01:33:16.901504    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:33:16.901504    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:33:16.905080    6044 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0328 01:33:16.905241    6044 round_trippers.go:577] Response Headers:
	I0328 01:33:16.905241    6044 round_trippers.go:580]     Audit-Id: 9578e60d-3511-47df-aa4a-d349f9c6e6ae
	I0328 01:33:16.905241    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:33:16.905241    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:33:16.905241    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:33:16.905241    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:33:16.905241    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:33:16 GMT
	I0328 01:33:16.905511    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"2022","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5245 chars]
	I0328 01:33:17.399827    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-776ph
	I0328 01:33:17.399827    6044 round_trippers.go:469] Request Headers:
	I0328 01:33:17.399827    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:33:17.399827    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:33:17.407576    6044 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0328 01:33:17.407576    6044 round_trippers.go:577] Response Headers:
	I0328 01:33:17.407576    6044 round_trippers.go:580]     Audit-Id: f26e8de5-d5c3-4767-8bb7-a54885777109
	I0328 01:33:17.407576    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:33:17.407576    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:33:17.407576    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:33:17.407576    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:33:17.407576    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:33:17 GMT
	I0328 01:33:17.407576    6044 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-76f75df574-776ph","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"dc1416cc-736d-4eab-b95d-e963572b78e3","resourceVersion":"1866","creationTimestamp":"2024-03-28T01:07:44Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"840f6c47-adf3-4c08-8b73-04e55f98f236","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:07:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"840f6c47-adf3-4c08-8b73-04e55f98f236\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0328 01:33:17.408546    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:33:17.408546    6044 round_trippers.go:469] Request Headers:
	I0328 01:33:17.408546    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:33:17.408546    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:33:17.411301    6044 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0328 01:33:17.411301    6044 round_trippers.go:577] Response Headers:
	I0328 01:33:17.411301    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:33:17.411301    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:33:17.411301    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:33:17.411301    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:33:17.411301    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:33:17 GMT
	I0328 01:33:17.411301    6044 round_trippers.go:580]     Audit-Id: 30f0b1aa-bbba-47c2-bbf3-690d649f4bc0
	I0328 01:33:17.411301    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"2022","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5245 chars]
	I0328 01:33:17.901107    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-776ph
	I0328 01:33:17.901107    6044 round_trippers.go:469] Request Headers:
	I0328 01:33:17.901107    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:33:17.901107    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:33:17.906092    6044 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 01:33:17.906290    6044 round_trippers.go:577] Response Headers:
	I0328 01:33:17.906290    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:33:17.906290    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:33:17.906290    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:33:17.906373    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:33:17.906373    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:33:17 GMT
	I0328 01:33:17.906373    6044 round_trippers.go:580]     Audit-Id: 5196dee3-705b-4ac9-a1ec-8e5be88ae743
	I0328 01:33:17.906572    6044 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-76f75df574-776ph","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"dc1416cc-736d-4eab-b95d-e963572b78e3","resourceVersion":"1866","creationTimestamp":"2024-03-28T01:07:44Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"840f6c47-adf3-4c08-8b73-04e55f98f236","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:07:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"840f6c47-adf3-4c08-8b73-04e55f98f236\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0328 01:33:17.907433    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:33:17.907501    6044 round_trippers.go:469] Request Headers:
	I0328 01:33:17.907501    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:33:17.907501    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:33:17.914444    6044 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0328 01:33:17.914444    6044 round_trippers.go:577] Response Headers:
	I0328 01:33:17.914444    6044 round_trippers.go:580]     Audit-Id: 58a5a43c-84a9-4925-a606-c4536f5d3546
	I0328 01:33:17.914444    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:33:17.914444    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:33:17.914444    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:33:17.914444    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:33:17.914444    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:33:17 GMT
	I0328 01:33:17.914444    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"2022","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5245 chars]
	I0328 01:33:17.915370    6044 pod_ready.go:102] pod "coredns-76f75df574-776ph" in "kube-system" namespace has status "Ready":"False"
	I0328 01:33:18.402917    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-776ph
	I0328 01:33:18.402917    6044 round_trippers.go:469] Request Headers:
	I0328 01:33:18.402917    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:33:18.402917    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:33:18.407927    6044 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 01:33:18.407927    6044 round_trippers.go:577] Response Headers:
	I0328 01:33:18.408022    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:33:18 GMT
	I0328 01:33:18.408022    6044 round_trippers.go:580]     Audit-Id: e8e8cec9-4405-412b-ae2e-8591588baca6
	I0328 01:33:18.408022    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:33:18.408022    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:33:18.408022    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:33:18.408022    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:33:18.408668    6044 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-76f75df574-776ph","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"dc1416cc-736d-4eab-b95d-e963572b78e3","resourceVersion":"1866","creationTimestamp":"2024-03-28T01:07:44Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"840f6c47-adf3-4c08-8b73-04e55f98f236","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:07:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"840f6c47-adf3-4c08-8b73-04e55f98f236\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0328 01:33:18.409411    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:33:18.409550    6044 round_trippers.go:469] Request Headers:
	I0328 01:33:18.409550    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:33:18.409550    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:33:18.412808    6044 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0328 01:33:18.413559    6044 round_trippers.go:577] Response Headers:
	I0328 01:33:18.413559    6044 round_trippers.go:580]     Audit-Id: ac4cc8f8-0a5f-4d22-bb27-b9aea5861fd6
	I0328 01:33:18.413654    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:33:18.413654    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:33:18.413683    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:33:18.413683    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:33:18.413683    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:33:18 GMT
	I0328 01:33:18.414482    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"2022","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5245 chars]
	I0328 01:33:18.905651    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-776ph
	I0328 01:33:18.905651    6044 round_trippers.go:469] Request Headers:
	I0328 01:33:18.905651    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:33:18.905651    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:33:18.909988    6044 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 01:33:18.910574    6044 round_trippers.go:577] Response Headers:
	I0328 01:33:18.910574    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:33:18.910574    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:33:18.910574    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:33:18 GMT
	I0328 01:33:18.910574    6044 round_trippers.go:580]     Audit-Id: c6ba6769-7e4c-4ca6-8b95-1565d8e682a7
	I0328 01:33:18.910651    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:33:18.910651    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:33:18.911083    6044 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-76f75df574-776ph","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"dc1416cc-736d-4eab-b95d-e963572b78e3","resourceVersion":"1866","creationTimestamp":"2024-03-28T01:07:44Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"840f6c47-adf3-4c08-8b73-04e55f98f236","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:07:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"840f6c47-adf3-4c08-8b73-04e55f98f236\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0328 01:33:18.911473    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:33:18.911473    6044 round_trippers.go:469] Request Headers:
	I0328 01:33:18.911473    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:33:18.911473    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:33:18.917722    6044 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0328 01:33:18.917722    6044 round_trippers.go:577] Response Headers:
	I0328 01:33:18.917722    6044 round_trippers.go:580]     Audit-Id: b1185063-2fdf-4360-89a9-b28a5a464a6a
	I0328 01:33:18.917722    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:33:18.917722    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:33:18.917722    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:33:18.917722    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:33:18.917722    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:33:18 GMT
	I0328 01:33:18.917722    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"2022","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5245 chars]
	I0328 01:33:19.404803    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-776ph
	I0328 01:33:19.404803    6044 round_trippers.go:469] Request Headers:
	I0328 01:33:19.404803    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:33:19.404803    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:33:19.409290    6044 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 01:33:19.409290    6044 round_trippers.go:577] Response Headers:
	I0328 01:33:19.409290    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:33:19.409290    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:33:19 GMT
	I0328 01:33:19.409290    6044 round_trippers.go:580]     Audit-Id: 05136cd0-871d-47a8-bece-44a7c2d54057
	I0328 01:33:19.409290    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:33:19.409290    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:33:19.409290    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:33:19.410568    6044 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-76f75df574-776ph","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"dc1416cc-736d-4eab-b95d-e963572b78e3","resourceVersion":"1866","creationTimestamp":"2024-03-28T01:07:44Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"840f6c47-adf3-4c08-8b73-04e55f98f236","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:07:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"840f6c47-adf3-4c08-8b73-04e55f98f236\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0328 01:33:19.411203    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:33:19.411203    6044 round_trippers.go:469] Request Headers:
	I0328 01:33:19.411203    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:33:19.411203    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:33:19.414352    6044 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0328 01:33:19.414352    6044 round_trippers.go:577] Response Headers:
	I0328 01:33:19.414914    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:33:19.414914    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:33:19.414914    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:33:19 GMT
	I0328 01:33:19.414914    6044 round_trippers.go:580]     Audit-Id: 4c67372e-2985-4ee7-bce4-ef9ecdf18ed6
	I0328 01:33:19.414914    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:33:19.414914    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:33:19.418276    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"2022","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5245 chars]
	I0328 01:33:19.899518    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-776ph
	I0328 01:33:19.899518    6044 round_trippers.go:469] Request Headers:
	I0328 01:33:19.899518    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:33:19.899518    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:33:19.904120    6044 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 01:33:19.904120    6044 round_trippers.go:577] Response Headers:
	I0328 01:33:19.904120    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:33:19.904120    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:33:19.904120    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:33:19.904630    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:33:19 GMT
	I0328 01:33:19.904630    6044 round_trippers.go:580]     Audit-Id: a1d3cb39-2d93-40ea-9f75-5e97c532f9a4
	I0328 01:33:19.904630    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:33:19.904960    6044 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-76f75df574-776ph","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"dc1416cc-736d-4eab-b95d-e963572b78e3","resourceVersion":"1866","creationTimestamp":"2024-03-28T01:07:44Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"840f6c47-adf3-4c08-8b73-04e55f98f236","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:07:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"840f6c47-adf3-4c08-8b73-04e55f98f236\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0328 01:33:19.905188    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:33:19.905188    6044 round_trippers.go:469] Request Headers:
	I0328 01:33:19.905188    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:33:19.905188    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:33:19.908916    6044 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0328 01:33:19.908916    6044 round_trippers.go:577] Response Headers:
	I0328 01:33:19.909063    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:33:19.909063    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:33:19.909063    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:33:19.909063    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:33:19 GMT
	I0328 01:33:19.909063    6044 round_trippers.go:580]     Audit-Id: c6081e4a-c066-40e4-b2e5-6dedf37b322b
	I0328 01:33:19.909063    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:33:19.909310    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"2022","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5245 chars]
	I0328 01:33:20.397745    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-776ph
	I0328 01:33:20.397745    6044 round_trippers.go:469] Request Headers:
	I0328 01:33:20.397745    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:33:20.397745    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:33:20.402460    6044 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 01:33:20.402460    6044 round_trippers.go:577] Response Headers:
	I0328 01:33:20.402460    6044 round_trippers.go:580]     Audit-Id: 9d63d476-4603-40a1-b6cf-ac7e3ab521b6
	I0328 01:33:20.402460    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:33:20.402460    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:33:20.402460    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:33:20.402460    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:33:20.402460    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:33:20 GMT
	I0328 01:33:20.402958    6044 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-76f75df574-776ph","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"dc1416cc-736d-4eab-b95d-e963572b78e3","resourceVersion":"1866","creationTimestamp":"2024-03-28T01:07:44Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"840f6c47-adf3-4c08-8b73-04e55f98f236","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:07:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"840f6c47-adf3-4c08-8b73-04e55f98f236\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0328 01:33:20.403695    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:33:20.403695    6044 round_trippers.go:469] Request Headers:
	I0328 01:33:20.403768    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:33:20.403768    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:33:20.407707    6044 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0328 01:33:20.407707    6044 round_trippers.go:577] Response Headers:
	I0328 01:33:20.407707    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:33:20 GMT
	I0328 01:33:20.407707    6044 round_trippers.go:580]     Audit-Id: 6c0d7ef7-701a-484e-bf66-ee1122400092
	I0328 01:33:20.407707    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:33:20.407707    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:33:20.407707    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:33:20.407707    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:33:20.408442    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"2022","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5245 chars]
	I0328 01:33:20.408442    6044 pod_ready.go:102] pod "coredns-76f75df574-776ph" in "kube-system" namespace has status "Ready":"False"
	I0328 01:33:20.894156    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-776ph
	I0328 01:33:20.894208    6044 round_trippers.go:469] Request Headers:
	I0328 01:33:20.894249    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:33:20.894249    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:33:20.899615    6044 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 01:33:20.899615    6044 round_trippers.go:577] Response Headers:
	I0328 01:33:20.899615    6044 round_trippers.go:580]     Audit-Id: c9ea700e-6641-417f-9de5-079ee99cacad
	I0328 01:33:20.899615    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:33:20.899615    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:33:20.899615    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:33:20.899615    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:33:20.899615    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:33:20 GMT
	I0328 01:33:20.899873    6044 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-76f75df574-776ph","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"dc1416cc-736d-4eab-b95d-e963572b78e3","resourceVersion":"1866","creationTimestamp":"2024-03-28T01:07:44Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"840f6c47-adf3-4c08-8b73-04e55f98f236","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:07:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"840f6c47-adf3-4c08-8b73-04e55f98f236\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0328 01:33:20.900575    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:33:20.900575    6044 round_trippers.go:469] Request Headers:
	I0328 01:33:20.900630    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:33:20.900630    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:33:20.902864    6044 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0328 01:33:20.902864    6044 round_trippers.go:577] Response Headers:
	I0328 01:33:20.903854    6044 round_trippers.go:580]     Audit-Id: 470b4198-4f70-46bd-ade5-8c242c0f24b4
	I0328 01:33:20.903854    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:33:20.903854    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:33:20.903854    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:33:20.903854    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:33:20.903854    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:33:20 GMT
	I0328 01:33:20.903854    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"2022","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5245 chars]
	I0328 01:33:21.392759    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-776ph
	I0328 01:33:21.392876    6044 round_trippers.go:469] Request Headers:
	I0328 01:33:21.392876    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:33:21.392876    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:33:21.398006    6044 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 01:33:21.398079    6044 round_trippers.go:577] Response Headers:
	I0328 01:33:21.398079    6044 round_trippers.go:580]     Audit-Id: c8ebdaaf-c590-4048-ae7b-d0ed8b58b9d5
	I0328 01:33:21.398079    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:33:21.398079    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:33:21.398079    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:33:21.398079    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:33:21.398079    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:33:21 GMT
	I0328 01:33:21.398306    6044 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-76f75df574-776ph","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"dc1416cc-736d-4eab-b95d-e963572b78e3","resourceVersion":"1866","creationTimestamp":"2024-03-28T01:07:44Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"840f6c47-adf3-4c08-8b73-04e55f98f236","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:07:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"840f6c47-adf3-4c08-8b73-04e55f98f236\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0328 01:33:21.399252    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:33:21.399322    6044 round_trippers.go:469] Request Headers:
	I0328 01:33:21.399322    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:33:21.399322    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:33:21.404540    6044 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 01:33:21.404884    6044 round_trippers.go:577] Response Headers:
	I0328 01:33:21.404884    6044 round_trippers.go:580]     Audit-Id: bf9f4e0b-1af1-4172-b6f2-ce6bc2ea962c
	I0328 01:33:21.404884    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:33:21.404884    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:33:21.404884    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:33:21.404884    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:33:21.404884    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:33:21 GMT
	I0328 01:33:21.405286    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"2022","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5245 chars]
	I0328 01:33:21.905299    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-776ph
	I0328 01:33:21.905299    6044 round_trippers.go:469] Request Headers:
	I0328 01:33:21.905299    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:33:21.905299    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:33:21.910037    6044 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 01:33:21.910936    6044 round_trippers.go:577] Response Headers:
	I0328 01:33:21.910936    6044 round_trippers.go:580]     Audit-Id: 2a1d2e6c-530d-41fb-9cb0-98890abcd2ea
	I0328 01:33:21.910936    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:33:21.910936    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:33:21.910936    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:33:21.910936    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:33:21.910936    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:33:21 GMT
	I0328 01:33:21.911431    6044 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-76f75df574-776ph","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"dc1416cc-736d-4eab-b95d-e963572b78e3","resourceVersion":"1866","creationTimestamp":"2024-03-28T01:07:44Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"840f6c47-adf3-4c08-8b73-04e55f98f236","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:07:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"840f6c47-adf3-4c08-8b73-04e55f98f236\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0328 01:33:21.912165    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:33:21.912165    6044 round_trippers.go:469] Request Headers:
	I0328 01:33:21.912165    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:33:21.912165    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:33:21.915552    6044 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0328 01:33:21.915552    6044 round_trippers.go:577] Response Headers:
	I0328 01:33:21.915889    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:33:21 GMT
	I0328 01:33:21.915889    6044 round_trippers.go:580]     Audit-Id: 9f4530bb-3c9b-4280-a9c2-07399eb81622
	I0328 01:33:21.915889    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:33:21.915889    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:33:21.915889    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:33:21.915889    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:33:21.916186    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"2022","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5245 chars]
	I0328 01:33:22.403352    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-776ph
	I0328 01:33:22.403352    6044 round_trippers.go:469] Request Headers:
	I0328 01:33:22.403544    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:33:22.403544    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:33:22.407852    6044 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 01:33:22.408415    6044 round_trippers.go:577] Response Headers:
	I0328 01:33:22.408415    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:33:22.408415    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:33:22.408415    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:33:22.408415    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:33:22.408415    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:33:22 GMT
	I0328 01:33:22.408415    6044 round_trippers.go:580]     Audit-Id: 606f40f5-7f9b-4288-b822-4d47454db001
	I0328 01:33:22.408604    6044 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-76f75df574-776ph","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"dc1416cc-736d-4eab-b95d-e963572b78e3","resourceVersion":"1866","creationTimestamp":"2024-03-28T01:07:44Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"840f6c47-adf3-4c08-8b73-04e55f98f236","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:07:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"840f6c47-adf3-4c08-8b73-04e55f98f236\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0328 01:33:22.409433    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:33:22.409433    6044 round_trippers.go:469] Request Headers:
	I0328 01:33:22.409433    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:33:22.409433    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:33:22.415817    6044 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0328 01:33:22.415817    6044 round_trippers.go:577] Response Headers:
	I0328 01:33:22.415817    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:33:22.415817    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:33:22.415817    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:33:22 GMT
	I0328 01:33:22.415817    6044 round_trippers.go:580]     Audit-Id: 01e67ea8-6e97-4707-bd0b-476f219825e3
	I0328 01:33:22.415817    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:33:22.415817    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:33:22.415817    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"2022","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5245 chars]
	I0328 01:33:22.416615    6044 pod_ready.go:102] pod "coredns-76f75df574-776ph" in "kube-system" namespace has status "Ready":"False"
	I0328 01:33:22.903274    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-776ph
	I0328 01:33:22.903529    6044 round_trippers.go:469] Request Headers:
	I0328 01:33:22.903529    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:33:22.903529    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:33:22.907957    6044 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 01:33:22.907957    6044 round_trippers.go:577] Response Headers:
	I0328 01:33:22.907957    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:33:22.907957    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:33:22.907957    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:33:22.907957    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:33:22.908620    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:33:22 GMT
	I0328 01:33:22.908620    6044 round_trippers.go:580]     Audit-Id: bb826da2-ddc8-4349-8c9f-c4fb52a53976
	I0328 01:33:22.908922    6044 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-76f75df574-776ph","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"dc1416cc-736d-4eab-b95d-e963572b78e3","resourceVersion":"1866","creationTimestamp":"2024-03-28T01:07:44Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"840f6c47-adf3-4c08-8b73-04e55f98f236","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:07:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"840f6c47-adf3-4c08-8b73-04e55f98f236\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0328 01:33:22.910097    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:33:22.910097    6044 round_trippers.go:469] Request Headers:
	I0328 01:33:22.910097    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:33:22.910097    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:33:22.912493    6044 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0328 01:33:22.913485    6044 round_trippers.go:577] Response Headers:
	I0328 01:33:22.913485    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:33:22 GMT
	I0328 01:33:22.913485    6044 round_trippers.go:580]     Audit-Id: 94b4cedf-100f-4aa1-aaac-f83031d5f39e
	I0328 01:33:22.913485    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:33:22.913485    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:33:22.913485    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:33:22.913485    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:33:22.913766    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"2022","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5245 chars]
	I0328 01:33:23.404480    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-776ph
	I0328 01:33:23.404480    6044 round_trippers.go:469] Request Headers:
	I0328 01:33:23.404480    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:33:23.404480    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:33:23.409204    6044 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 01:33:23.409204    6044 round_trippers.go:577] Response Headers:
	I0328 01:33:23.409276    6044 round_trippers.go:580]     Audit-Id: 6276168e-37bf-492d-b48a-a9f66a3f87a6
	I0328 01:33:23.409276    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:33:23.409310    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:33:23.409310    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:33:23.409310    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:33:23.409310    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:33:23 GMT
	I0328 01:33:23.409404    6044 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-76f75df574-776ph","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"dc1416cc-736d-4eab-b95d-e963572b78e3","resourceVersion":"1866","creationTimestamp":"2024-03-28T01:07:44Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"840f6c47-adf3-4c08-8b73-04e55f98f236","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:07:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"840f6c47-adf3-4c08-8b73-04e55f98f236\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0328 01:33:23.410204    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:33:23.410204    6044 round_trippers.go:469] Request Headers:
	I0328 01:33:23.410204    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:33:23.410204    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:33:23.413003    6044 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0328 01:33:23.413003    6044 round_trippers.go:577] Response Headers:
	I0328 01:33:23.413003    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:33:23.413003    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:33:23.413003    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:33:23.413003    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:33:23.413003    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:33:23 GMT
	I0328 01:33:23.413003    6044 round_trippers.go:580]     Audit-Id: 649876e3-fcc5-4458-b4d8-c338999393e1
	I0328 01:33:23.413882    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"2022","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5245 chars]
	I0328 01:33:23.906653    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-776ph
	I0328 01:33:23.906653    6044 round_trippers.go:469] Request Headers:
	I0328 01:33:23.906653    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:33:23.906653    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:33:23.910896    6044 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 01:33:23.910896    6044 round_trippers.go:577] Response Headers:
	I0328 01:33:23.910896    6044 round_trippers.go:580]     Audit-Id: 018033d5-367b-4a13-a0da-f13d72f9fcef
	I0328 01:33:23.910896    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:33:23.910896    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:33:23.910896    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:33:23.910896    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:33:23.910896    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:33:23 GMT
	I0328 01:33:23.911490    6044 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-76f75df574-776ph","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"dc1416cc-736d-4eab-b95d-e963572b78e3","resourceVersion":"1866","creationTimestamp":"2024-03-28T01:07:44Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"840f6c47-adf3-4c08-8b73-04e55f98f236","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:07:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"840f6c47-adf3-4c08-8b73-04e55f98f236\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0328 01:33:23.912553    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:33:23.913137    6044 round_trippers.go:469] Request Headers:
	I0328 01:33:23.913137    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:33:23.913206    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:33:23.921862    6044 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0328 01:33:23.921862    6044 round_trippers.go:577] Response Headers:
	I0328 01:33:23.921862    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:33:23.921862    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:33:23 GMT
	I0328 01:33:23.921862    6044 round_trippers.go:580]     Audit-Id: 43cef98e-fc69-41a5-950f-6c2a290b1f05
	I0328 01:33:23.921862    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:33:23.921862    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:33:23.921862    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:33:23.922403    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"2022","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5245 chars]
	I0328 01:33:24.397812    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-776ph
	I0328 01:33:24.397812    6044 round_trippers.go:469] Request Headers:
	I0328 01:33:24.397812    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:33:24.397812    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:33:24.402109    6044 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 01:33:24.402109    6044 round_trippers.go:577] Response Headers:
	I0328 01:33:24.402109    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:33:24.402109    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:33:24.402109    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:33:24 GMT
	I0328 01:33:24.402109    6044 round_trippers.go:580]     Audit-Id: a5e41747-1a4c-4ba8-9286-77e10147e999
	I0328 01:33:24.402109    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:33:24.402109    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:33:24.402295    6044 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-76f75df574-776ph","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"dc1416cc-736d-4eab-b95d-e963572b78e3","resourceVersion":"1866","creationTimestamp":"2024-03-28T01:07:44Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"840f6c47-adf3-4c08-8b73-04e55f98f236","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:07:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"840f6c47-adf3-4c08-8b73-04e55f98f236\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0328 01:33:24.403163    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:33:24.403163    6044 round_trippers.go:469] Request Headers:
	I0328 01:33:24.403163    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:33:24.403163    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:33:24.410097    6044 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0328 01:33:24.410661    6044 round_trippers.go:577] Response Headers:
	I0328 01:33:24.410661    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:33:24.410661    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:33:24.410729    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:33:24 GMT
	I0328 01:33:24.410729    6044 round_trippers.go:580]     Audit-Id: b6255abf-63dc-4217-abcc-5ee0715dbc95
	I0328 01:33:24.410729    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:33:24.410729    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:33:24.410829    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"2022","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5245 chars]
	I0328 01:33:24.902191    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-776ph
	I0328 01:33:24.902191    6044 round_trippers.go:469] Request Headers:
	I0328 01:33:24.902357    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:33:24.902357    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:33:24.907558    6044 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 01:33:24.907558    6044 round_trippers.go:577] Response Headers:
	I0328 01:33:24.907558    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:33:24.907558    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:33:24 GMT
	I0328 01:33:24.907683    6044 round_trippers.go:580]     Audit-Id: deb81a15-3d88-4342-ba4f-e2e1ac4c1806
	I0328 01:33:24.907683    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:33:24.907683    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:33:24.907683    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:33:24.907889    6044 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-76f75df574-776ph","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"dc1416cc-736d-4eab-b95d-e963572b78e3","resourceVersion":"1866","creationTimestamp":"2024-03-28T01:07:44Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"840f6c47-adf3-4c08-8b73-04e55f98f236","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:07:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"840f6c47-adf3-4c08-8b73-04e55f98f236\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0328 01:33:24.908676    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:33:24.908676    6044 round_trippers.go:469] Request Headers:
	I0328 01:33:24.908676    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:33:24.908676    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:33:24.913405    6044 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 01:33:24.913405    6044 round_trippers.go:577] Response Headers:
	I0328 01:33:24.913405    6044 round_trippers.go:580]     Audit-Id: d188cdbf-7fe0-4567-acf0-37d815bbd882
	I0328 01:33:24.913405    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:33:24.913405    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:33:24.913405    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:33:24.913405    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:33:24.913405    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:33:24 GMT
	I0328 01:33:24.913949    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"2022","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5245 chars]
	I0328 01:33:24.914066    6044 pod_ready.go:102] pod "coredns-76f75df574-776ph" in "kube-system" namespace has status "Ready":"False"
	I0328 01:33:25.403099    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-776ph
	I0328 01:33:25.403343    6044 round_trippers.go:469] Request Headers:
	I0328 01:33:25.403343    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:33:25.403343    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:33:25.408053    6044 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0328 01:33:25.408053    6044 round_trippers.go:577] Response Headers:
	I0328 01:33:25.408053    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:33:25.408053    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:33:25.408145    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:33:25.408166    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:33:25.408166    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:33:25 GMT
	I0328 01:33:25.408166    6044 round_trippers.go:580]     Audit-Id: 255f01ef-c25b-49a7-abdb-fa33cbfcf5ca
	I0328 01:33:25.408322    6044 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-76f75df574-776ph","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"dc1416cc-736d-4eab-b95d-e963572b78e3","resourceVersion":"1866","creationTimestamp":"2024-03-28T01:07:44Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"840f6c47-adf3-4c08-8b73-04e55f98f236","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:07:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"840f6c47-adf3-4c08-8b73-04e55f98f236\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0328 01:33:25.409120    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:33:25.409120    6044 round_trippers.go:469] Request Headers:
	I0328 01:33:25.409120    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:33:25.409120    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:33:25.412953    6044 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0328 01:33:25.413086    6044 round_trippers.go:577] Response Headers:
	I0328 01:33:25.413086    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:33:25.413086    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:33:25.413086    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:33:25 GMT
	I0328 01:33:25.413086    6044 round_trippers.go:580]     Audit-Id: 1994d08b-72b6-43d6-856a-7a355a2b49c4
	I0328 01:33:25.413086    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:33:25.413177    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:33:25.413467    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"2022","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5245 chars]
	I0328 01:33:25.905013    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-776ph
	I0328 01:33:25.905013    6044 round_trippers.go:469] Request Headers:
	I0328 01:33:25.905236    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:33:25.905236    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:33:25.909050    6044 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0328 01:33:25.909050    6044 round_trippers.go:577] Response Headers:
	I0328 01:33:25.909050    6044 round_trippers.go:580]     Audit-Id: 5deb2128-2210-487a-b92f-aa7c2cdece70
	I0328 01:33:25.909050    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:33:25.909050    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:33:25.909050    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:33:25.909050    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:33:25.909050    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:33:25 GMT
	I0328 01:33:25.910341    6044 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-76f75df574-776ph","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"dc1416cc-736d-4eab-b95d-e963572b78e3","resourceVersion":"2063","creationTimestamp":"2024-03-28T01:07:44Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"840f6c47-adf3-4c08-8b73-04e55f98f236","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:07:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"840f6c47-adf3-4c08-8b73-04e55f98f236\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6788 chars]
	I0328 01:33:25.910711    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:33:25.910711    6044 round_trippers.go:469] Request Headers:
	I0328 01:33:25.910711    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:33:25.910711    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:33:25.916312    6044 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 01:33:25.916312    6044 round_trippers.go:577] Response Headers:
	I0328 01:33:25.916384    6044 round_trippers.go:580]     Audit-Id: 2d0d6149-375c-4f70-bb45-ffa30adfe893
	I0328 01:33:25.916384    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:33:25.916384    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:33:25.916410    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:33:25.916410    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:33:25.916410    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:33:25 GMT
	I0328 01:33:25.916410    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"2022","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5245 chars]
	I0328 01:33:25.916997    6044 pod_ready.go:92] pod "coredns-76f75df574-776ph" in "kube-system" namespace has status "Ready":"True"
	I0328 01:33:25.916997    6044 pod_ready.go:81] duration metric: took 31.5250368s for pod "coredns-76f75df574-776ph" in "kube-system" namespace to be "Ready" ...
	I0328 01:33:25.916997    6044 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-240000" in "kube-system" namespace to be "Ready" ...
	I0328 01:33:25.917162    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-240000
	I0328 01:33:25.917162    6044 round_trippers.go:469] Request Headers:
	I0328 01:33:25.917162    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:33:25.917162    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:33:25.920588    6044 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0328 01:33:25.920966    6044 round_trippers.go:577] Response Headers:
	I0328 01:33:25.920966    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:33:25.920966    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:33:25.920966    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:33:25.920966    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:33:25 GMT
	I0328 01:33:25.920966    6044 round_trippers.go:580]     Audit-Id: 59e230f4-b079-450c-bdec-30104df7caac
	I0328 01:33:25.920966    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:33:25.920966    6044 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-240000","namespace":"kube-system","uid":"0a33e012-ebfe-4ac4-bf0b-ffccdd7308de","resourceVersion":"1963","creationTimestamp":"2024-03-28T01:32:19Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.28.229.19:2379","kubernetes.io/config.hash":"9f48c65a58defdbb87996760bf93b230","kubernetes.io/config.mirror":"9f48c65a58defdbb87996760bf93b230","kubernetes.io/config.seen":"2024-03-28T01:32:13.690653938Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:32:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6160 chars]
	I0328 01:33:25.921756    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:33:25.921756    6044 round_trippers.go:469] Request Headers:
	I0328 01:33:25.921756    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:33:25.921756    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:33:25.924080    6044 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0328 01:33:25.924080    6044 round_trippers.go:577] Response Headers:
	I0328 01:33:25.924080    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:33:25.924080    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:33:25.924080    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:33:25 GMT
	I0328 01:33:25.924080    6044 round_trippers.go:580]     Audit-Id: 89b917da-6ab9-41dd-b17d-f464b23dec36
	I0328 01:33:25.924080    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:33:25.924080    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:33:25.925230    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"2022","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5245 chars]
	I0328 01:33:25.925442    6044 pod_ready.go:92] pod "etcd-multinode-240000" in "kube-system" namespace has status "Ready":"True"
	I0328 01:33:25.925442    6044 pod_ready.go:81] duration metric: took 8.4443ms for pod "etcd-multinode-240000" in "kube-system" namespace to be "Ready" ...
	I0328 01:33:25.925442    6044 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-240000" in "kube-system" namespace to be "Ready" ...
	I0328 01:33:25.925442    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-240000
	I0328 01:33:25.925442    6044 round_trippers.go:469] Request Headers:
	I0328 01:33:25.925442    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:33:25.925442    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:33:25.928789    6044 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0328 01:33:25.928789    6044 round_trippers.go:577] Response Headers:
	I0328 01:33:25.928789    6044 round_trippers.go:580]     Audit-Id: fe7cf0ab-f8de-4b1f-b8e6-d3d60812f570
	I0328 01:33:25.928789    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:33:25.928789    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:33:25.928789    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:33:25.928789    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:33:25.928789    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:33:25 GMT
	I0328 01:33:25.928789    6044 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-240000","namespace":"kube-system","uid":"8b9b4cf7-40b0-4a3e-96ca-28c934f9789a","resourceVersion":"1984","creationTimestamp":"2024-03-28T01:32:19Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.28.229.19:8443","kubernetes.io/config.hash":"ada1864a97137760b3789cc738948aa2","kubernetes.io/config.mirror":"ada1864a97137760b3789cc738948aa2","kubernetes.io/config.seen":"2024-03-28T01:32:13.677615805Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:32:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7695 chars]
	I0328 01:33:25.928789    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:33:25.928789    6044 round_trippers.go:469] Request Headers:
	I0328 01:33:25.928789    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:33:25.928789    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:33:25.931977    6044 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0328 01:33:25.931977    6044 round_trippers.go:577] Response Headers:
	I0328 01:33:25.931977    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:33:25.931977    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:33:25.931977    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:33:25.931977    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:33:25 GMT
	I0328 01:33:25.931977    6044 round_trippers.go:580]     Audit-Id: 99e2ed29-b1ea-436e-8744-0217d01b6d3c
	I0328 01:33:25.931977    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:33:25.932851    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"2022","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5245 chars]
	I0328 01:33:25.932851    6044 pod_ready.go:92] pod "kube-apiserver-multinode-240000" in "kube-system" namespace has status "Ready":"True"
	I0328 01:33:25.932851    6044 pod_ready.go:81] duration metric: took 7.409ms for pod "kube-apiserver-multinode-240000" in "kube-system" namespace to be "Ready" ...
	I0328 01:33:25.932851    6044 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-240000" in "kube-system" namespace to be "Ready" ...
	I0328 01:33:25.933385    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-240000
	I0328 01:33:25.933385    6044 round_trippers.go:469] Request Headers:
	I0328 01:33:25.933385    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:33:25.933385    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:33:25.935852    6044 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0328 01:33:25.936183    6044 round_trippers.go:577] Response Headers:
	I0328 01:33:25.936183    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:33:25 GMT
	I0328 01:33:25.936183    6044 round_trippers.go:580]     Audit-Id: 31b40d30-0790-47d2-b4cb-f05e4189e561
	I0328 01:33:25.936183    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:33:25.936183    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:33:25.936183    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:33:25.936183    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:33:25.936703    6044 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-240000","namespace":"kube-system","uid":"4a79ab06-2314-43bb-8e37-45b9aab24e4e","resourceVersion":"1953","creationTimestamp":"2024-03-28T01:07:31Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"092744cdc60a216294790b52c372bdaa","kubernetes.io/config.mirror":"092744cdc60a216294790b52c372bdaa","kubernetes.io/config.seen":"2024-03-28T01:07:31.458008757Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:07:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7470 chars]
	I0328 01:33:25.936925    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:33:25.936925    6044 round_trippers.go:469] Request Headers:
	I0328 01:33:25.936925    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:33:25.936925    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:33:25.940824    6044 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0328 01:33:25.941259    6044 round_trippers.go:577] Response Headers:
	I0328 01:33:25.941259    6044 round_trippers.go:580]     Audit-Id: 1e88c2ce-a8c0-476b-bc4d-cbef2355dc7b
	I0328 01:33:25.941259    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:33:25.941259    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:33:25.941259    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:33:25.941259    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:33:25.941259    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:33:25 GMT
	I0328 01:33:25.941395    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"2022","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5245 chars]
	I0328 01:33:25.941395    6044 pod_ready.go:92] pod "kube-controller-manager-multinode-240000" in "kube-system" namespace has status "Ready":"True"
	I0328 01:33:25.941395    6044 pod_ready.go:81] duration metric: took 8.5438ms for pod "kube-controller-manager-multinode-240000" in "kube-system" namespace to be "Ready" ...
	I0328 01:33:25.942059    6044 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-47rqg" in "kube-system" namespace to be "Ready" ...
	I0328 01:33:25.942163    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/namespaces/kube-system/pods/kube-proxy-47rqg
	I0328 01:33:25.942209    6044 round_trippers.go:469] Request Headers:
	I0328 01:33:25.942249    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:33:25.942249    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:33:25.945485    6044 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0328 01:33:25.945892    6044 round_trippers.go:577] Response Headers:
	I0328 01:33:25.945892    6044 round_trippers.go:580]     Audit-Id: d8b3859d-a319-40e1-9edd-ab754e7b7412
	I0328 01:33:25.945934    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:33:25.945934    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:33:25.945934    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:33:25.945934    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:33:25.945934    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:33:25 GMT
	I0328 01:33:25.946186    6044 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-47rqg","generateName":"kube-proxy-","namespace":"kube-system","uid":"22fd5683-834d-47ae-a5b4-1ed980514e1b","resourceVersion":"1926","creationTimestamp":"2024-03-28T01:07:44Z","labels":{"controller-revision-hash":"7659797656","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"386441f6-e376-4593-92ba-fa739207b68d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:07:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"386441f6-e376-4593-92ba-fa739207b68d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6034 chars]
	I0328 01:33:25.946186    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:33:25.946186    6044 round_trippers.go:469] Request Headers:
	I0328 01:33:25.946186    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:33:25.946186    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:33:25.959838    6044 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0328 01:33:25.959838    6044 round_trippers.go:577] Response Headers:
	I0328 01:33:25.959838    6044 round_trippers.go:580]     Audit-Id: a6a22129-282e-46e4-a6d9-f8ae6fcb4f8a
	I0328 01:33:25.959915    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:33:25.959915    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:33:25.959915    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:33:25.959915    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:33:25.959915    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:33:25 GMT
	I0328 01:33:25.960276    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"2022","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5245 chars]
	I0328 01:33:25.960576    6044 pod_ready.go:92] pod "kube-proxy-47rqg" in "kube-system" namespace has status "Ready":"True"
	I0328 01:33:25.960576    6044 pod_ready.go:81] duration metric: took 18.5164ms for pod "kube-proxy-47rqg" in "kube-system" namespace to be "Ready" ...
	I0328 01:33:25.960576    6044 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-55rch" in "kube-system" namespace to be "Ready" ...
	I0328 01:33:26.107823    6044 request.go:629] Waited for 146.8931ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.229.19:8443/api/v1/namespaces/kube-system/pods/kube-proxy-55rch
	I0328 01:33:26.107986    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/namespaces/kube-system/pods/kube-proxy-55rch
	I0328 01:33:26.108079    6044 round_trippers.go:469] Request Headers:
	I0328 01:33:26.108079    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:33:26.108079    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:33:26.112760    6044 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 01:33:26.112839    6044 round_trippers.go:577] Response Headers:
	I0328 01:33:26.112895    6044 round_trippers.go:580]     Audit-Id: 55311ac7-1fea-4d40-a4a9-0cd032216a29
	I0328 01:33:26.112895    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:33:26.112895    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:33:26.112895    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:33:26.112895    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:33:26.112895    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:33:26 GMT
	I0328 01:33:26.112895    6044 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-55rch","generateName":"kube-proxy-","namespace":"kube-system","uid":"a96f841b-3e8f-42c1-be63-03914c0b90e8","resourceVersion":"1831","creationTimestamp":"2024-03-28T01:15:58Z","labels":{"controller-revision-hash":"7659797656","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"386441f6-e376-4593-92ba-fa739207b68d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:15:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"386441f6-e376-4593-92ba-fa739207b68d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6067 chars]
	I0328 01:33:26.310240    6044 request.go:629] Waited for 196.3437ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.229.19:8443/api/v1/nodes/multinode-240000-m03
	I0328 01:33:26.310452    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000-m03
	I0328 01:33:26.310452    6044 round_trippers.go:469] Request Headers:
	I0328 01:33:26.310452    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:33:26.310571    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:33:26.314877    6044 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 01:33:26.314877    6044 round_trippers.go:577] Response Headers:
	I0328 01:33:26.314877    6044 round_trippers.go:580]     Audit-Id: 5c6c493c-a45d-451e-ada2-b34620109013
	I0328 01:33:26.314877    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:33:26.314877    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:33:26.314877    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:33:26.314877    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:33:26.314877    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:33:26 GMT
	I0328 01:33:26.315923    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000-m03","uid":"dbbc38c1-7871-4a48-98eb-4fd00b43bc22","resourceVersion":"2000","creationTimestamp":"2024-03-28T01:27:30Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_28T01_27_31_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:27:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:meta
data":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-mana [truncated 4407 chars]
	I0328 01:33:26.316173    6044 pod_ready.go:97] node "multinode-240000-m03" hosting pod "kube-proxy-55rch" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-240000-m03" has status "Ready":"Unknown"
	I0328 01:33:26.316173    6044 pod_ready.go:81] duration metric: took 355.5952ms for pod "kube-proxy-55rch" in "kube-system" namespace to be "Ready" ...
	E0328 01:33:26.316173    6044 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-240000-m03" hosting pod "kube-proxy-55rch" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-240000-m03" has status "Ready":"Unknown"
	I0328 01:33:26.316173    6044 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-t88gz" in "kube-system" namespace to be "Ready" ...
	I0328 01:33:26.512974    6044 request.go:629] Waited for 196.7991ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.229.19:8443/api/v1/namespaces/kube-system/pods/kube-proxy-t88gz
	I0328 01:33:26.512974    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/namespaces/kube-system/pods/kube-proxy-t88gz
	I0328 01:33:26.512974    6044 round_trippers.go:469] Request Headers:
	I0328 01:33:26.512974    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:33:26.512974    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:33:26.520672    6044 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0328 01:33:26.521149    6044 round_trippers.go:577] Response Headers:
	I0328 01:33:26.521149    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:33:26.521149    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:33:26.521149    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:33:26 GMT
	I0328 01:33:26.521149    6044 round_trippers.go:580]     Audit-Id: 84904272-5dff-4ae6-98d0-edaa0989a44f
	I0328 01:33:26.521251    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:33:26.521251    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:33:26.521544    6044 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-t88gz","generateName":"kube-proxy-","namespace":"kube-system","uid":"695603ac-ab24-4206-9728-342b2af018e4","resourceVersion":"2046","creationTimestamp":"2024-03-28T01:10:54Z","labels":{"controller-revision-hash":"7659797656","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"386441f6-e376-4593-92ba-fa739207b68d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:10:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"386441f6-e376-4593-92ba-fa739207b68d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6067 chars]
	I0328 01:33:26.715629    6044 request.go:629] Waited for 193.245ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.229.19:8443/api/v1/nodes/multinode-240000-m02
	I0328 01:33:26.715629    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000-m02
	I0328 01:33:26.715629    6044 round_trippers.go:469] Request Headers:
	I0328 01:33:26.715860    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:33:26.715860    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:33:26.719480    6044 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0328 01:33:26.719480    6044 round_trippers.go:577] Response Headers:
	I0328 01:33:26.720051    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:33:26.720051    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:33:26.720105    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:33:26.720105    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:33:26.720105    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:33:26 GMT
	I0328 01:33:26.720105    6044 round_trippers.go:580]     Audit-Id: db922d7b-6b81-4f10-97a8-3f415d74ee4d
	I0328 01:33:26.720105    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000-m02","uid":"dcbe05d8-e31e-4891-a7f5-f1d6a1993934","resourceVersion":"2050","creationTimestamp":"2024-03-28T01:10:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_28T01_10_55_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:10:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:meta
data":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-mana [truncated 4590 chars]
	I0328 01:33:26.720846    6044 pod_ready.go:97] node "multinode-240000-m02" hosting pod "kube-proxy-t88gz" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-240000-m02" has status "Ready":"Unknown"
	I0328 01:33:26.720846    6044 pod_ready.go:81] duration metric: took 404.6697ms for pod "kube-proxy-t88gz" in "kube-system" namespace to be "Ready" ...
	E0328 01:33:26.720846    6044 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-240000-m02" hosting pod "kube-proxy-t88gz" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-240000-m02" has status "Ready":"Unknown"
	I0328 01:33:26.720846    6044 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-240000" in "kube-system" namespace to be "Ready" ...
	I0328 01:33:26.916741    6044 request.go:629] Waited for 195.2064ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.229.19:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-240000
	I0328 01:33:26.916878    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-240000
	I0328 01:33:26.916878    6044 round_trippers.go:469] Request Headers:
	I0328 01:33:26.916878    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:33:26.916878    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:33:26.921108    6044 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 01:33:26.921108    6044 round_trippers.go:577] Response Headers:
	I0328 01:33:26.921108    6044 round_trippers.go:580]     Audit-Id: 04001a40-3617-4aa9-afcf-461b32414f73
	I0328 01:33:26.921108    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:33:26.921108    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:33:26.921108    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:33:26.921108    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:33:26.921108    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:33:26 GMT
	I0328 01:33:26.921908    6044 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-240000","namespace":"kube-system","uid":"7670489f-fb6c-4b5f-80e8-5fe8de8d7d19","resourceVersion":"1966","creationTimestamp":"2024-03-28T01:07:28Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"f5f9b00a2a0d8b16290abf555def0fb3","kubernetes.io/config.mirror":"f5f9b00a2a0d8b16290abf555def0fb3","kubernetes.io/config.seen":"2024-03-28T01:07:21.513186595Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:07:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{}
,"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 5200 chars]
	I0328 01:33:27.119643    6044 request.go:629] Waited for 197.429ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:33:27.119962    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:33:27.119962    6044 round_trippers.go:469] Request Headers:
	I0328 01:33:27.119962    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:33:27.119962    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:33:27.123702    6044 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0328 01:33:27.123702    6044 round_trippers.go:577] Response Headers:
	I0328 01:33:27.123702    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:33:27.123702    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:33:27.123702    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:33:27 GMT
	I0328 01:33:27.123702    6044 round_trippers.go:580]     Audit-Id: 074c09fb-8199-48a4-9987-29d324e2b7af
	I0328 01:33:27.123702    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:33:27.123702    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:33:27.124455    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"2022","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5245 chars]
	I0328 01:33:27.125162    6044 pod_ready.go:92] pod "kube-scheduler-multinode-240000" in "kube-system" namespace has status "Ready":"True"
	I0328 01:33:27.125234    6044 pod_ready.go:81] duration metric: took 404.386ms for pod "kube-scheduler-multinode-240000" in "kube-system" namespace to be "Ready" ...
	I0328 01:33:27.125234    6044 pod_ready.go:38] duration metric: took 32.7464721s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0328 01:33:27.125300    6044 api_server.go:52] waiting for apiserver process to appear ...
	I0328 01:33:27.135988    6044 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0328 01:33:27.167532    6044 command_runner.go:130] > 6539c85e1b61
	I0328 01:33:27.167532    6044 logs.go:276] 1 containers: [6539c85e1b61]
	I0328 01:33:27.178699    6044 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0328 01:33:27.205577    6044 command_runner.go:130] > ab4a76ecb029
	I0328 01:33:27.205577    6044 logs.go:276] 1 containers: [ab4a76ecb029]
	I0328 01:33:27.215601    6044 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0328 01:33:27.244506    6044 command_runner.go:130] > e6a5a75ec447
	I0328 01:33:27.244506    6044 command_runner.go:130] > 29e516c918ef
	I0328 01:33:27.244506    6044 logs.go:276] 2 containers: [e6a5a75ec447 29e516c918ef]
	I0328 01:33:27.255096    6044 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0328 01:33:27.280610    6044 command_runner.go:130] > bc83a37dbd03
	I0328 01:33:27.280610    6044 command_runner.go:130] > 7061eab02790
	I0328 01:33:27.280610    6044 logs.go:276] 2 containers: [bc83a37dbd03 7061eab02790]
	I0328 01:33:27.289627    6044 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0328 01:33:27.316168    6044 command_runner.go:130] > 7c9638784c60
	I0328 01:33:27.316168    6044 command_runner.go:130] > bb0b3c542264
	I0328 01:33:27.316168    6044 logs.go:276] 2 containers: [7c9638784c60 bb0b3c542264]
	I0328 01:33:27.325446    6044 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0328 01:33:27.356038    6044 command_runner.go:130] > ceaccf323dee
	I0328 01:33:27.356038    6044 command_runner.go:130] > 1aa05268773e
	I0328 01:33:27.356038    6044 logs.go:276] 2 containers: [ceaccf323dee 1aa05268773e]
	I0328 01:33:27.364608    6044 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0328 01:33:27.395264    6044 command_runner.go:130] > ee99098e42fc
	I0328 01:33:27.395264    6044 command_runner.go:130] > dc9808261b21
	I0328 01:33:27.395264    6044 logs.go:276] 2 containers: [ee99098e42fc dc9808261b21]
	I0328 01:33:27.395264    6044 logs.go:123] Gathering logs for kube-controller-manager [1aa05268773e] ...
	I0328 01:33:27.395264    6044 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1aa05268773e"
	I0328 01:33:27.440809    6044 command_runner.go:130] ! I0328 01:07:25.444563       1 serving.go:380] Generated self-signed cert in-memory
	I0328 01:33:27.440809    6044 command_runner.go:130] ! I0328 01:07:26.119304       1 controllermanager.go:187] "Starting" version="v1.29.3"
	I0328 01:33:27.440809    6044 command_runner.go:130] ! I0328 01:07:26.119639       1 controllermanager.go:189] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0328 01:33:27.441768    6044 command_runner.go:130] ! I0328 01:07:26.122078       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0328 01:33:27.441768    6044 command_runner.go:130] ! I0328 01:07:26.122399       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0328 01:33:27.441768    6044 command_runner.go:130] ! I0328 01:07:26.123748       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0328 01:33:27.441768    6044 command_runner.go:130] ! I0328 01:07:26.124035       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0328 01:33:27.441768    6044 command_runner.go:130] ! I0328 01:07:29.961001       1 controllermanager.go:735] "Started controller" controller="serviceaccount-token-controller"
	I0328 01:33:27.441768    6044 command_runner.go:130] ! I0328 01:07:29.961384       1 shared_informer.go:311] Waiting for caches to sync for tokens
	I0328 01:33:27.441768    6044 command_runner.go:130] ! I0328 01:07:29.977654       1 controllermanager.go:735] "Started controller" controller="serviceaccount-controller"
	I0328 01:33:27.441768    6044 command_runner.go:130] ! I0328 01:07:29.978314       1 serviceaccounts_controller.go:111] "Starting service account controller"
	I0328 01:33:27.441768    6044 command_runner.go:130] ! I0328 01:07:29.978353       1 shared_informer.go:311] Waiting for caches to sync for service account
	I0328 01:33:27.441768    6044 command_runner.go:130] ! I0328 01:07:29.991603       1 controllermanager.go:735] "Started controller" controller="job-controller"
	I0328 01:33:27.441768    6044 command_runner.go:130] ! I0328 01:07:29.992075       1 job_controller.go:224] "Starting job controller"
	I0328 01:33:27.441768    6044 command_runner.go:130] ! I0328 01:07:29.992191       1 shared_informer.go:311] Waiting for caches to sync for job
	I0328 01:33:27.441768    6044 command_runner.go:130] ! I0328 01:07:30.016866       1 controllermanager.go:735] "Started controller" controller="cronjob-controller"
	I0328 01:33:27.441768    6044 command_runner.go:130] ! I0328 01:07:30.017722       1 cronjob_controllerv2.go:139] "Starting cronjob controller v2"
	I0328 01:33:27.441768    6044 command_runner.go:130] ! I0328 01:07:30.017738       1 shared_informer.go:311] Waiting for caches to sync for cronjob
	I0328 01:33:27.441768    6044 command_runner.go:130] ! I0328 01:07:30.032215       1 node_lifecycle_controller.go:425] "Controller will reconcile labels"
	I0328 01:33:27.441768    6044 command_runner.go:130] ! I0328 01:07:30.032285       1 controllermanager.go:735] "Started controller" controller="node-lifecycle-controller"
	I0328 01:33:27.441768    6044 command_runner.go:130] ! I0328 01:07:30.032300       1 core.go:294] "Warning: configure-cloud-routes is set, but no cloud provider specified. Will not configure cloud provider routes."
	I0328 01:33:27.441768    6044 command_runner.go:130] ! I0328 01:07:30.032309       1 controllermanager.go:713] "Warning: skipping controller" controller="node-route-controller"
	I0328 01:33:27.441768    6044 command_runner.go:130] ! I0328 01:07:30.032580       1 node_lifecycle_controller.go:459] "Sending events to api server"
	I0328 01:33:27.441768    6044 command_runner.go:130] ! I0328 01:07:30.032630       1 node_lifecycle_controller.go:470] "Starting node controller"
	I0328 01:33:27.441768    6044 command_runner.go:130] ! I0328 01:07:30.032638       1 shared_informer.go:311] Waiting for caches to sync for taint
	I0328 01:33:27.441768    6044 command_runner.go:130] ! I0328 01:07:30.048026       1 controllermanager.go:735] "Started controller" controller="persistentvolume-protection-controller"
	I0328 01:33:27.441768    6044 command_runner.go:130] ! I0328 01:07:30.048977       1 pv_protection_controller.go:78] "Starting PV protection controller"
	I0328 01:33:27.441768    6044 command_runner.go:130] ! I0328 01:07:30.049064       1 shared_informer.go:311] Waiting for caches to sync for PV protection
	I0328 01:33:27.441768    6044 command_runner.go:130] ! I0328 01:07:30.062689       1 shared_informer.go:318] Caches are synced for tokens
	I0328 01:33:27.441768    6044 command_runner.go:130] ! I0328 01:07:30.089724       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="cronjobs.batch"
	I0328 01:33:27.441768    6044 command_runner.go:130] ! I0328 01:07:30.089888       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="deployments.apps"
	I0328 01:33:27.441768    6044 command_runner.go:130] ! I0328 01:07:30.089911       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="jobs.batch"
	I0328 01:33:27.441768    6044 command_runner.go:130] ! W0328 01:07:30.089999       1 shared_informer.go:591] resyncPeriod 14h20m6.725226039s is smaller than resyncCheckPeriod 16h11m20.804614115s and the informer has already started. Changing it to 16h11m20.804614115s
	I0328 01:33:27.441768    6044 command_runner.go:130] ! I0328 01:07:30.090238       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="serviceaccounts"
	I0328 01:33:27.441768    6044 command_runner.go:130] ! I0328 01:07:30.090386       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="daemonsets.apps"
	I0328 01:33:27.441768    6044 command_runner.go:130] ! I0328 01:07:30.090486       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="csistoragecapacities.storage.k8s.io"
	I0328 01:33:27.441768    6044 command_runner.go:130] ! I0328 01:07:30.090728       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="horizontalpodautoscalers.autoscaling"
	I0328 01:33:27.441768    6044 command_runner.go:130] ! I0328 01:07:30.090833       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="leases.coordination.k8s.io"
	I0328 01:33:27.441768    6044 command_runner.go:130] ! I0328 01:07:30.090916       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="endpointslices.discovery.k8s.io"
	I0328 01:33:27.441768    6044 command_runner.go:130] ! I0328 01:07:30.091233       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="statefulsets.apps"
	I0328 01:33:27.441768    6044 command_runner.go:130] ! I0328 01:07:30.091333       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="endpoints"
	I0328 01:33:27.441768    6044 command_runner.go:130] ! I0328 01:07:30.091456       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="roles.rbac.authorization.k8s.io"
	I0328 01:33:27.441768    6044 command_runner.go:130] ! I0328 01:07:30.091573       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="rolebindings.rbac.authorization.k8s.io"
	I0328 01:33:27.441768    6044 command_runner.go:130] ! I0328 01:07:30.091823       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="ingresses.networking.k8s.io"
	I0328 01:33:27.441768    6044 command_runner.go:130] ! I0328 01:07:30.091924       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="networkpolicies.networking.k8s.io"
	I0328 01:33:27.441768    6044 command_runner.go:130] ! I0328 01:07:30.092241       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="poddisruptionbudgets.policy"
	I0328 01:33:27.441768    6044 command_runner.go:130] ! I0328 01:07:30.092436       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="podtemplates"
	I0328 01:33:27.441768    6044 command_runner.go:130] ! I0328 01:07:30.092587       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="controllerrevisions.apps"
	I0328 01:33:27.441768    6044 command_runner.go:130] ! I0328 01:07:30.092720       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="limitranges"
	I0328 01:33:27.441768    6044 command_runner.go:130] ! I0328 01:07:30.092907       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="replicasets.apps"
	I0328 01:33:27.441768    6044 command_runner.go:130] ! I0328 01:07:30.092993       1 controllermanager.go:735] "Started controller" controller="resourcequota-controller"
	I0328 01:33:27.441768    6044 command_runner.go:130] ! I0328 01:07:30.093270       1 resource_quota_controller.go:294] "Starting resource quota controller"
	I0328 01:33:27.441768    6044 command_runner.go:130] ! I0328 01:07:30.095516       1 shared_informer.go:311] Waiting for caches to sync for resource quota
	I0328 01:33:27.441768    6044 command_runner.go:130] ! I0328 01:07:30.095735       1 resource_quota_monitor.go:305] "QuotaMonitor running"
	I0328 01:33:27.441768    6044 command_runner.go:130] ! I0328 01:07:30.117824       1 controllermanager.go:735] "Started controller" controller="legacy-serviceaccount-token-cleaner-controller"
	I0328 01:33:27.441768    6044 command_runner.go:130] ! I0328 01:07:30.117990       1 legacy_serviceaccount_token_cleaner.go:103] "Starting legacy service account token cleaner controller"
	I0328 01:33:27.441768    6044 command_runner.go:130] ! I0328 01:07:30.118005       1 shared_informer.go:311] Waiting for caches to sync for legacy-service-account-token-cleaner
	I0328 01:33:27.441768    6044 command_runner.go:130] ! I0328 01:07:30.139352       1 controllermanager.go:735] "Started controller" controller="disruption-controller"
	I0328 01:33:27.441768    6044 command_runner.go:130] ! I0328 01:07:30.139526       1 disruption.go:433] "Sending events to api server."
	I0328 01:33:27.441768    6044 command_runner.go:130] ! I0328 01:07:30.139561       1 disruption.go:444] "Starting disruption controller"
	I0328 01:33:27.441768    6044 command_runner.go:130] ! I0328 01:07:30.139568       1 shared_informer.go:311] Waiting for caches to sync for disruption
	I0328 01:33:27.441768    6044 command_runner.go:130] ! I0328 01:07:30.158607       1 controllermanager.go:735] "Started controller" controller="certificatesigningrequest-approving-controller"
	I0328 01:33:27.441768    6044 command_runner.go:130] ! I0328 01:07:30.158860       1 certificate_controller.go:115] "Starting certificate controller" name="csrapproving"
	I0328 01:33:27.442771    6044 command_runner.go:130] ! I0328 01:07:30.158912       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrapproving
	I0328 01:33:27.442771    6044 command_runner.go:130] ! I0328 01:07:30.170615       1 controllermanager.go:735] "Started controller" controller="persistentvolume-binder-controller"
	I0328 01:33:27.442771    6044 command_runner.go:130] ! I0328 01:07:30.171245       1 pv_controller_base.go:319] "Starting persistent volume controller"
	I0328 01:33:27.442771    6044 command_runner.go:130] ! I0328 01:07:30.171330       1 shared_informer.go:311] Waiting for caches to sync for persistent volume
	I0328 01:33:27.442771    6044 command_runner.go:130] ! I0328 01:07:30.319254       1 controllermanager.go:735] "Started controller" controller="clusterrole-aggregation-controller"
	I0328 01:33:27.442771    6044 command_runner.go:130] ! I0328 01:07:30.319305       1 clusterroleaggregation_controller.go:189] "Starting ClusterRoleAggregator controller"
	I0328 01:33:27.442771    6044 command_runner.go:130] ! I0328 01:07:30.319687       1 shared_informer.go:311] Waiting for caches to sync for ClusterRoleAggregator
	I0328 01:33:27.442771    6044 command_runner.go:130] ! I0328 01:07:30.471941       1 controllermanager.go:735] "Started controller" controller="ttl-after-finished-controller"
	I0328 01:33:27.442771    6044 command_runner.go:130] ! I0328 01:07:30.472075       1 controllermanager.go:687] "Controller is disabled by a feature gate" controller="validatingadmissionpolicy-status-controller" requiredFeatureGates=["ValidatingAdmissionPolicy"]
	I0328 01:33:27.442771    6044 command_runner.go:130] ! I0328 01:07:30.472153       1 ttlafterfinished_controller.go:109] "Starting TTL after finished controller"
	I0328 01:33:27.442771    6044 command_runner.go:130] ! I0328 01:07:30.472461       1 shared_informer.go:311] Waiting for caches to sync for TTL after finished
	I0328 01:33:27.442771    6044 command_runner.go:130] ! I0328 01:07:30.621249       1 controllermanager.go:735] "Started controller" controller="pod-garbage-collector-controller"
	I0328 01:33:27.442771    6044 command_runner.go:130] ! I0328 01:07:30.621373       1 gc_controller.go:101] "Starting GC controller"
	I0328 01:33:27.442771    6044 command_runner.go:130] ! I0328 01:07:30.621385       1 shared_informer.go:311] Waiting for caches to sync for GC
	I0328 01:33:27.442771    6044 command_runner.go:130] ! I0328 01:07:30.935875       1 controllermanager.go:735] "Started controller" controller="horizontal-pod-autoscaler-controller"
	I0328 01:33:27.442771    6044 command_runner.go:130] ! I0328 01:07:30.935911       1 controllermanager.go:687] "Controller is disabled by a feature gate" controller="service-cidr-controller" requiredFeatureGates=["MultiCIDRServiceAllocator"]
	I0328 01:33:27.442771    6044 command_runner.go:130] ! I0328 01:07:30.935949       1 horizontal.go:200] "Starting HPA controller"
	I0328 01:33:27.442771    6044 command_runner.go:130] ! I0328 01:07:30.935957       1 shared_informer.go:311] Waiting for caches to sync for HPA
	I0328 01:33:27.442771    6044 command_runner.go:130] ! I0328 01:07:31.068710       1 controllermanager.go:735] "Started controller" controller="bootstrap-signer-controller"
	I0328 01:33:27.442771    6044 command_runner.go:130] ! I0328 01:07:31.068846       1 shared_informer.go:311] Waiting for caches to sync for bootstrap_signer
	I0328 01:33:27.442771    6044 command_runner.go:130] ! I0328 01:07:31.220656       1 controllermanager.go:735] "Started controller" controller="root-ca-certificate-publisher-controller"
	I0328 01:33:27.442771    6044 command_runner.go:130] ! I0328 01:07:31.220877       1 publisher.go:102] "Starting root CA cert publisher controller"
	I0328 01:33:27.442771    6044 command_runner.go:130] ! I0328 01:07:31.220890       1 shared_informer.go:311] Waiting for caches to sync for crt configmap
	I0328 01:33:27.442771    6044 command_runner.go:130] ! I0328 01:07:31.379912       1 controllermanager.go:735] "Started controller" controller="endpointslice-mirroring-controller"
	I0328 01:33:27.442771    6044 command_runner.go:130] ! I0328 01:07:31.380187       1 endpointslicemirroring_controller.go:223] "Starting EndpointSliceMirroring controller"
	I0328 01:33:27.442771    6044 command_runner.go:130] ! I0328 01:07:31.380276       1 shared_informer.go:311] Waiting for caches to sync for endpoint_slice_mirroring
	I0328 01:33:27.442771    6044 command_runner.go:130] ! I0328 01:07:31.525433       1 controllermanager.go:735] "Started controller" controller="replicationcontroller-controller"
	I0328 01:33:27.442771    6044 command_runner.go:130] ! I0328 01:07:31.525577       1 replica_set.go:214] "Starting controller" name="replicationcontroller"
	I0328 01:33:27.442771    6044 command_runner.go:130] ! I0328 01:07:31.525588       1 shared_informer.go:311] Waiting for caches to sync for ReplicationController
	I0328 01:33:27.442771    6044 command_runner.go:130] ! I0328 01:07:31.690023       1 controllermanager.go:735] "Started controller" controller="ttl-controller"
	I0328 01:33:27.442771    6044 command_runner.go:130] ! I0328 01:07:31.690130       1 ttl_controller.go:124] "Starting TTL controller"
	I0328 01:33:27.442771    6044 command_runner.go:130] ! I0328 01:07:31.690144       1 shared_informer.go:311] Waiting for caches to sync for TTL
	I0328 01:33:27.442771    6044 command_runner.go:130] ! I0328 01:07:31.828859       1 controllermanager.go:735] "Started controller" controller="token-cleaner-controller"
	I0328 01:33:27.442771    6044 command_runner.go:130] ! I0328 01:07:31.828953       1 tokencleaner.go:112] "Starting token cleaner controller"
	I0328 01:33:27.442771    6044 command_runner.go:130] ! I0328 01:07:31.828963       1 shared_informer.go:311] Waiting for caches to sync for token_cleaner
	I0328 01:33:27.442771    6044 command_runner.go:130] ! I0328 01:07:31.828970       1 shared_informer.go:318] Caches are synced for token_cleaner
	I0328 01:33:27.442771    6044 command_runner.go:130] ! I0328 01:07:31.991678       1 controllermanager.go:735] "Started controller" controller="persistentvolume-attach-detach-controller"
	I0328 01:33:27.442771    6044 command_runner.go:130] ! I0328 01:07:31.994944       1 controllermanager.go:687] "Controller is disabled by a feature gate" controller="storageversion-garbage-collector-controller" requiredFeatureGates=["APIServerIdentity","StorageVersionAPI"]
	I0328 01:33:27.442771    6044 command_runner.go:130] ! I0328 01:07:31.994881       1 attach_detach_controller.go:337] "Starting attach detach controller"
	I0328 01:33:27.442771    6044 command_runner.go:130] ! I0328 01:07:31.995033       1 shared_informer.go:311] Waiting for caches to sync for attach detach
	I0328 01:33:27.442771    6044 command_runner.go:130] ! I0328 01:07:32.040043       1 controllermanager.go:735] "Started controller" controller="taint-eviction-controller"
	I0328 01:33:27.442771    6044 command_runner.go:130] ! I0328 01:07:32.041773       1 taint_eviction.go:285] "Starting" controller="taint-eviction-controller"
	I0328 01:33:27.442771    6044 command_runner.go:130] ! I0328 01:07:32.041876       1 taint_eviction.go:291] "Sending events to api server"
	I0328 01:33:27.442771    6044 command_runner.go:130] ! I0328 01:07:32.041901       1 shared_informer.go:311] Waiting for caches to sync for taint-eviction-controller
	I0328 01:33:27.442771    6044 command_runner.go:130] ! I0328 01:07:32.281623       1 controllermanager.go:735] "Started controller" controller="namespace-controller"
	I0328 01:33:27.442771    6044 command_runner.go:130] ! I0328 01:07:32.281708       1 namespace_controller.go:197] "Starting namespace controller"
	I0328 01:33:27.442771    6044 command_runner.go:130] ! I0328 01:07:32.281718       1 shared_informer.go:311] Waiting for caches to sync for namespace
	I0328 01:33:27.442771    6044 command_runner.go:130] ! I0328 01:07:32.316698       1 certificate_controller.go:115] "Starting certificate controller" name="csrsigning-kubelet-serving"
	I0328 01:33:27.442771    6044 command_runner.go:130] ! I0328 01:07:32.316737       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-kubelet-serving
	I0328 01:33:27.442771    6044 command_runner.go:130] ! I0328 01:07:32.316772       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0328 01:33:27.442771    6044 command_runner.go:130] ! I0328 01:07:32.322120       1 certificate_controller.go:115] "Starting certificate controller" name="csrsigning-kubelet-client"
	I0328 01:33:27.442771    6044 command_runner.go:130] ! I0328 01:07:32.322156       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-kubelet-client
	I0328 01:33:27.442771    6044 command_runner.go:130] ! I0328 01:07:32.322181       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0328 01:33:27.442771    6044 command_runner.go:130] ! I0328 01:07:32.327656       1 certificate_controller.go:115] "Starting certificate controller" name="csrsigning-kube-apiserver-client"
	I0328 01:33:27.442771    6044 command_runner.go:130] ! I0328 01:07:32.327690       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client
	I0328 01:33:27.442771    6044 command_runner.go:130] ! I0328 01:07:32.327721       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0328 01:33:27.442771    6044 command_runner.go:130] ! I0328 01:07:32.331471       1 controllermanager.go:735] "Started controller" controller="certificatesigningrequest-signing-controller"
	I0328 01:33:27.442771    6044 command_runner.go:130] ! I0328 01:07:32.331563       1 certificate_controller.go:115] "Starting certificate controller" name="csrsigning-legacy-unknown"
	I0328 01:33:27.442771    6044 command_runner.go:130] ! I0328 01:07:32.331574       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
	I0328 01:33:27.442771    6044 command_runner.go:130] ! I0328 01:07:32.331616       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0328 01:33:27.442771    6044 command_runner.go:130] ! E0328 01:07:32.365862       1 core.go:270] "Failed to start cloud node lifecycle controller" err="no cloud provider provided"
	I0328 01:33:27.442771    6044 command_runner.go:130] ! I0328 01:07:32.365985       1 controllermanager.go:713] "Warning: skipping controller" controller="cloud-node-lifecycle-controller"
	I0328 01:33:27.442771    6044 command_runner.go:130] ! I0328 01:07:32.366024       1 controllermanager.go:687] "Controller is disabled by a feature gate" controller="resourceclaim-controller" requiredFeatureGates=["DynamicResourceAllocation"]
	I0328 01:33:27.442771    6044 command_runner.go:130] ! I0328 01:07:32.520320       1 controllermanager.go:735] "Started controller" controller="endpointslice-controller"
	I0328 01:33:27.442771    6044 command_runner.go:130] ! I0328 01:07:32.520407       1 endpointslice_controller.go:264] "Starting endpoint slice controller"
	I0328 01:33:27.442771    6044 command_runner.go:130] ! I0328 01:07:32.520419       1 shared_informer.go:311] Waiting for caches to sync for endpoint_slice
	I0328 01:33:27.442771    6044 command_runner.go:130] ! I0328 01:07:32.567130       1 controllermanager.go:735] "Started controller" controller="certificatesigningrequest-cleaner-controller"
	I0328 01:33:27.443760    6044 command_runner.go:130] ! I0328 01:07:32.567208       1 cleaner.go:83] "Starting CSR cleaner controller"
	I0328 01:33:27.443760    6044 command_runner.go:130] ! I0328 01:07:32.719261       1 controllermanager.go:735] "Started controller" controller="statefulset-controller"
	I0328 01:33:27.443760    6044 command_runner.go:130] ! I0328 01:07:32.719392       1 stateful_set.go:161] "Starting stateful set controller"
	I0328 01:33:27.443760    6044 command_runner.go:130] ! I0328 01:07:32.719403       1 shared_informer.go:311] Waiting for caches to sync for stateful set
	I0328 01:33:27.443760    6044 command_runner.go:130] ! I0328 01:07:32.872730       1 controllermanager.go:735] "Started controller" controller="persistentvolumeclaim-protection-controller"
	I0328 01:33:27.443760    6044 command_runner.go:130] ! I0328 01:07:32.872869       1 pvc_protection_controller.go:102] "Starting PVC protection controller"
	I0328 01:33:27.443760    6044 command_runner.go:130] ! I0328 01:07:32.873455       1 shared_informer.go:311] Waiting for caches to sync for PVC protection
	I0328 01:33:27.443760    6044 command_runner.go:130] ! I0328 01:07:33.116208       1 garbagecollector.go:155] "Starting controller" controller="garbagecollector"
	I0328 01:33:27.443760    6044 command_runner.go:130] ! I0328 01:07:33.116233       1 shared_informer.go:311] Waiting for caches to sync for garbage collector
	I0328 01:33:27.443760    6044 command_runner.go:130] ! I0328 01:07:33.116257       1 graph_builder.go:302] "Running" component="GraphBuilder"
	I0328 01:33:27.443760    6044 command_runner.go:130] ! I0328 01:07:33.116280       1 controllermanager.go:735] "Started controller" controller="garbage-collector-controller"
	I0328 01:33:27.443760    6044 command_runner.go:130] ! I0328 01:07:33.370650       1 controllermanager.go:735] "Started controller" controller="deployment-controller"
	I0328 01:33:27.443760    6044 command_runner.go:130] ! I0328 01:07:33.370836       1 deployment_controller.go:168] "Starting controller" controller="deployment"
	I0328 01:33:27.443760    6044 command_runner.go:130] ! I0328 01:07:33.370851       1 shared_informer.go:311] Waiting for caches to sync for deployment
	I0328 01:33:27.443760    6044 command_runner.go:130] ! E0328 01:07:33.529036       1 core.go:105] "Failed to start service controller" err="WARNING: no cloud provider provided, services of type LoadBalancer will fail"
	I0328 01:33:27.443760    6044 command_runner.go:130] ! I0328 01:07:33.529209       1 controllermanager.go:713] "Warning: skipping controller" controller="service-lb-controller"
	I0328 01:33:27.443760    6044 command_runner.go:130] ! I0328 01:07:33.674381       1 controllermanager.go:735] "Started controller" controller="replicaset-controller"
	I0328 01:33:27.443760    6044 command_runner.go:130] ! I0328 01:07:33.674638       1 replica_set.go:214] "Starting controller" name="replicaset"
	I0328 01:33:27.443760    6044 command_runner.go:130] ! I0328 01:07:33.674700       1 shared_informer.go:311] Waiting for caches to sync for ReplicaSet
	I0328 01:33:27.443760    6044 command_runner.go:130] ! I0328 01:07:43.727895       1 range_allocator.go:111] "No Secondary Service CIDR provided. Skipping filtering out secondary service addresses"
	I0328 01:33:27.443760    6044 command_runner.go:130] ! I0328 01:07:43.728282       1 controllermanager.go:735] "Started controller" controller="node-ipam-controller"
	I0328 01:33:27.443760    6044 command_runner.go:130] ! I0328 01:07:43.728736       1 node_ipam_controller.go:160] "Starting ipam controller"
	I0328 01:33:27.443760    6044 command_runner.go:130] ! I0328 01:07:43.728751       1 shared_informer.go:311] Waiting for caches to sync for node
	I0328 01:33:27.443760    6044 command_runner.go:130] ! I0328 01:07:43.743975       1 controllermanager.go:735] "Started controller" controller="persistentvolume-expander-controller"
	I0328 01:33:27.443760    6044 command_runner.go:130] ! I0328 01:07:43.744248       1 expand_controller.go:328] "Starting expand controller"
	I0328 01:33:27.443760    6044 command_runner.go:130] ! I0328 01:07:43.744261       1 shared_informer.go:311] Waiting for caches to sync for expand
	I0328 01:33:27.443760    6044 command_runner.go:130] ! I0328 01:07:43.764054       1 controllermanager.go:735] "Started controller" controller="ephemeral-volume-controller"
	I0328 01:33:27.443760    6044 command_runner.go:130] ! I0328 01:07:43.765369       1 controller.go:169] "Starting ephemeral volume controller"
	I0328 01:33:27.443760    6044 command_runner.go:130] ! I0328 01:07:43.765400       1 shared_informer.go:311] Waiting for caches to sync for ephemeral
	I0328 01:33:27.443760    6044 command_runner.go:130] ! I0328 01:07:43.801140       1 controllermanager.go:735] "Started controller" controller="endpoints-controller"
	I0328 01:33:27.443760    6044 command_runner.go:130] ! I0328 01:07:43.801602       1 endpoints_controller.go:174] "Starting endpoint controller"
	I0328 01:33:27.443760    6044 command_runner.go:130] ! I0328 01:07:43.801743       1 shared_informer.go:311] Waiting for caches to sync for endpoint
	I0328 01:33:27.443760    6044 command_runner.go:130] ! I0328 01:07:43.818031       1 controllermanager.go:735] "Started controller" controller="daemonset-controller"
	I0328 01:33:27.443760    6044 command_runner.go:130] ! I0328 01:07:43.818707       1 daemon_controller.go:291] "Starting daemon sets controller"
	I0328 01:33:27.443760    6044 command_runner.go:130] ! I0328 01:07:43.820733       1 shared_informer.go:311] Waiting for caches to sync for daemon sets
	I0328 01:33:27.443760    6044 command_runner.go:130] ! I0328 01:07:43.839571       1 shared_informer.go:311] Waiting for caches to sync for resource quota
	I0328 01:33:27.443760    6044 command_runner.go:130] ! I0328 01:07:43.887668       1 shared_informer.go:318] Caches are synced for TTL after finished
	I0328 01:33:27.443760    6044 command_runner.go:130] ! I0328 01:07:43.905965       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-240000\" does not exist"
	I0328 01:33:27.443760    6044 command_runner.go:130] ! I0328 01:07:43.917970       1 shared_informer.go:318] Caches are synced for cronjob
	I0328 01:33:27.443760    6044 command_runner.go:130] ! I0328 01:07:43.918581       1 shared_informer.go:318] Caches are synced for legacy-service-account-token-cleaner
	I0328 01:33:27.443760    6044 command_runner.go:130] ! I0328 01:07:43.921260       1 shared_informer.go:318] Caches are synced for endpoint_slice
	I0328 01:33:27.443760    6044 command_runner.go:130] ! I0328 01:07:43.921573       1 shared_informer.go:318] Caches are synced for GC
	I0328 01:33:27.443760    6044 command_runner.go:130] ! I0328 01:07:43.921763       1 shared_informer.go:318] Caches are synced for stateful set
	I0328 01:33:27.443760    6044 command_runner.go:130] ! I0328 01:07:43.923599       1 shared_informer.go:318] Caches are synced for ClusterRoleAggregator
	I0328 01:33:27.443760    6044 command_runner.go:130] ! I0328 01:07:43.924267       1 shared_informer.go:311] Waiting for caches to sync for garbage collector
	I0328 01:33:27.443760    6044 command_runner.go:130] ! I0328 01:07:43.922298       1 shared_informer.go:318] Caches are synced for daemon sets
	I0328 01:33:27.443760    6044 command_runner.go:130] ! I0328 01:07:43.928013       1 shared_informer.go:318] Caches are synced for ReplicationController
	I0328 01:33:27.443760    6044 command_runner.go:130] ! I0328 01:07:43.928774       1 shared_informer.go:318] Caches are synced for node
	I0328 01:33:27.443760    6044 command_runner.go:130] ! I0328 01:07:43.932324       1 range_allocator.go:174] "Sending events to api server"
	I0328 01:33:27.443760    6044 command_runner.go:130] ! I0328 01:07:43.932665       1 range_allocator.go:178] "Starting range CIDR allocator"
	I0328 01:33:27.443760    6044 command_runner.go:130] ! I0328 01:07:43.932965       1 shared_informer.go:311] Waiting for caches to sync for cidrallocator
	I0328 01:33:27.444759    6044 command_runner.go:130] ! I0328 01:07:43.933302       1 shared_informer.go:318] Caches are synced for cidrallocator
	I0328 01:33:27.444759    6044 command_runner.go:130] ! I0328 01:07:43.922308       1 shared_informer.go:318] Caches are synced for crt configmap
	I0328 01:33:27.444759    6044 command_runner.go:130] ! I0328 01:07:43.936175       1 shared_informer.go:318] Caches are synced for HPA
	I0328 01:33:27.444759    6044 command_runner.go:130] ! I0328 01:07:43.933370       1 shared_informer.go:318] Caches are synced for taint
	I0328 01:33:27.444759    6044 command_runner.go:130] ! I0328 01:07:43.936479       1 node_lifecycle_controller.go:1222] "Initializing eviction metric for zone" zone=""
	I0328 01:33:27.445767    6044 command_runner.go:130] ! I0328 01:07:43.936564       1 node_lifecycle_controller.go:874] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-240000"
	I0328 01:33:27.445767    6044 command_runner.go:130] ! I0328 01:07:43.936602       1 node_lifecycle_controller.go:1026] "Controller detected that all Nodes are not-Ready. Entering master disruption mode"
	I0328 01:33:27.445767    6044 command_runner.go:130] ! I0328 01:07:43.937774       1 event.go:376] "Event occurred" object="multinode-240000" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-240000 event: Registered Node multinode-240000 in Controller"
	I0328 01:33:27.445767    6044 command_runner.go:130] ! I0328 01:07:43.945317       1 shared_informer.go:318] Caches are synced for taint-eviction-controller
	I0328 01:33:27.445767    6044 command_runner.go:130] ! I0328 01:07:43.945634       1 shared_informer.go:318] Caches are synced for expand
	I0328 01:33:27.445767    6044 command_runner.go:130] ! I0328 01:07:43.953475       1 shared_informer.go:318] Caches are synced for PV protection
	I0328 01:33:27.445767    6044 command_runner.go:130] ! I0328 01:07:43.955430       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-240000" podCIDRs=["10.244.0.0/24"]
	I0328 01:33:27.445767    6044 command_runner.go:130] ! I0328 01:07:43.967780       1 shared_informer.go:318] Caches are synced for ephemeral
	I0328 01:33:27.445767    6044 command_runner.go:130] ! I0328 01:07:43.970146       1 shared_informer.go:318] Caches are synced for bootstrap_signer
	I0328 01:33:27.445767    6044 command_runner.go:130] ! I0328 01:07:43.973346       1 shared_informer.go:318] Caches are synced for persistent volume
	I0328 01:33:27.445767    6044 command_runner.go:130] ! I0328 01:07:43.973608       1 shared_informer.go:318] Caches are synced for PVC protection
	I0328 01:33:27.445767    6044 command_runner.go:130] ! I0328 01:07:43.981178       1 shared_informer.go:318] Caches are synced for endpoint_slice_mirroring
	I0328 01:33:27.445767    6044 command_runner.go:130] ! I0328 01:07:43.981918       1 event.go:376] "Event occurred" object="kube-system/kube-scheduler-multinode-240000" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0328 01:33:27.445767    6044 command_runner.go:130] ! I0328 01:07:43.981953       1 event.go:376] "Event occurred" object="kube-system/kube-apiserver-multinode-240000" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0328 01:33:27.445767    6044 command_runner.go:130] ! I0328 01:07:43.981962       1 event.go:376] "Event occurred" object="kube-system/etcd-multinode-240000" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0328 01:33:27.445767    6044 command_runner.go:130] ! I0328 01:07:43.982017       1 shared_informer.go:318] Caches are synced for namespace
	I0328 01:33:27.445767    6044 command_runner.go:130] ! I0328 01:07:43.982124       1 shared_informer.go:318] Caches are synced for service account
	I0328 01:33:27.445767    6044 command_runner.go:130] ! I0328 01:07:43.983577       1 event.go:376] "Event occurred" object="kube-system/kube-controller-manager-multinode-240000" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0328 01:33:27.445767    6044 command_runner.go:130] ! I0328 01:07:43.992236       1 shared_informer.go:318] Caches are synced for job
	I0328 01:33:27.445767    6044 command_runner.go:130] ! I0328 01:07:43.992438       1 shared_informer.go:318] Caches are synced for TTL
	I0328 01:33:27.445767    6044 command_runner.go:130] ! I0328 01:07:43.995152       1 shared_informer.go:318] Caches are synced for attach detach
	I0328 01:33:27.445767    6044 command_runner.go:130] ! I0328 01:07:44.003250       1 shared_informer.go:318] Caches are synced for endpoint
	I0328 01:33:27.445767    6044 command_runner.go:130] ! I0328 01:07:44.023343       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-client
	I0328 01:33:27.445767    6044 command_runner.go:130] ! I0328 01:07:44.023546       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-serving
	I0328 01:33:27.445767    6044 command_runner.go:130] ! I0328 01:07:44.030529       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0328 01:33:27.445767    6044 command_runner.go:130] ! I0328 01:07:44.032370       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-legacy-unknown
	I0328 01:33:27.445767    6044 command_runner.go:130] ! I0328 01:07:44.039826       1 shared_informer.go:318] Caches are synced for resource quota
	I0328 01:33:27.445767    6044 command_runner.go:130] ! I0328 01:07:44.039875       1 shared_informer.go:318] Caches are synced for disruption
	I0328 01:33:27.445767    6044 command_runner.go:130] ! I0328 01:07:44.059155       1 shared_informer.go:318] Caches are synced for certificate-csrapproving
	I0328 01:33:27.445767    6044 command_runner.go:130] ! I0328 01:07:44.071020       1 shared_informer.go:318] Caches are synced for deployment
	I0328 01:33:27.445767    6044 command_runner.go:130] ! I0328 01:07:44.074821       1 shared_informer.go:318] Caches are synced for ReplicaSet
	I0328 01:33:27.445767    6044 command_runner.go:130] ! I0328 01:07:44.095916       1 shared_informer.go:318] Caches are synced for resource quota
	I0328 01:33:27.445767    6044 command_runner.go:130] ! I0328 01:07:44.097596       1 event.go:376] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-rwghf"
	I0328 01:33:27.445767    6044 command_runner.go:130] ! I0328 01:07:44.101053       1 event.go:376] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-47rqg"
	I0328 01:33:27.445767    6044 command_runner.go:130] ! I0328 01:07:44.321636       1 event.go:376] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-76f75df574 to 2"
	I0328 01:33:27.445767    6044 command_runner.go:130] ! I0328 01:07:44.505533       1 event.go:376] "Event occurred" object="kube-system/coredns-76f75df574" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-76f75df574-fgw8j"
	I0328 01:33:27.445767    6044 command_runner.go:130] ! I0328 01:07:44.516581       1 shared_informer.go:318] Caches are synced for garbage collector
	I0328 01:33:27.445767    6044 command_runner.go:130] ! I0328 01:07:44.516605       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I0328 01:33:27.445767    6044 command_runner.go:130] ! I0328 01:07:44.526884       1 shared_informer.go:318] Caches are synced for garbage collector
	I0328 01:33:27.445767    6044 command_runner.go:130] ! I0328 01:07:44.626020       1 event.go:376] "Event occurred" object="kube-system/coredns-76f75df574" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-76f75df574-776ph"
	I0328 01:33:27.445767    6044 command_runner.go:130] ! I0328 01:07:44.696026       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="375.988233ms"
	I0328 01:33:27.445767    6044 command_runner.go:130] ! I0328 01:07:44.735389       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="39.221627ms"
	I0328 01:33:27.445767    6044 command_runner.go:130] ! I0328 01:07:44.735856       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="390.399µs"
	I0328 01:33:27.445767    6044 command_runner.go:130] ! I0328 01:07:45.456688       1 event.go:376] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-76f75df574 to 1 from 2"
	I0328 01:33:27.445767    6044 command_runner.go:130] ! I0328 01:07:45.536906       1 event.go:376] "Event occurred" object="kube-system/coredns-76f75df574" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-76f75df574-fgw8j"
	I0328 01:33:27.445767    6044 command_runner.go:130] ! I0328 01:07:45.583335       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="126.427189ms"
	I0328 01:33:27.445767    6044 command_runner.go:130] ! I0328 01:07:45.637187       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="53.741283ms"
	I0328 01:33:27.445767    6044 command_runner.go:130] ! I0328 01:07:45.710380       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="73.035205ms"
	I0328 01:33:27.445767    6044 command_runner.go:130] ! I0328 01:07:45.710568       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="73.7µs"
	I0328 01:33:27.445767    6044 command_runner.go:130] ! I0328 01:07:57.839298       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="81.8µs"
	I0328 01:33:27.445767    6044 command_runner.go:130] ! I0328 01:07:57.891332       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="135.3µs"
	I0328 01:33:27.445767    6044 command_runner.go:130] ! I0328 01:07:58.938669       1 node_lifecycle_controller.go:1045] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I0328 01:33:27.446766    6044 command_runner.go:130] ! I0328 01:07:59.949779       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="25.944009ms"
	I0328 01:33:27.446766    6044 command_runner.go:130] ! I0328 01:07:59.950218       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="327.807µs"
	I0328 01:33:27.446766    6044 command_runner.go:130] ! I0328 01:10:54.764176       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-240000-m02\" does not exist"
	I0328 01:33:27.446766    6044 command_runner.go:130] ! I0328 01:10:54.803820       1 event.go:376] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-hsnfl"
	I0328 01:33:27.446766    6044 command_runner.go:130] ! I0328 01:10:54.803944       1 event.go:376] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-t88gz"
	I0328 01:33:27.446766    6044 command_runner.go:130] ! I0328 01:10:54.804885       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-240000-m02" podCIDRs=["10.244.1.0/24"]
	I0328 01:33:27.446766    6044 command_runner.go:130] ! I0328 01:10:58.975442       1 node_lifecycle_controller.go:874] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-240000-m02"
	I0328 01:33:27.446766    6044 command_runner.go:130] ! I0328 01:10:58.975715       1 event.go:376] "Event occurred" object="multinode-240000-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-240000-m02 event: Registered Node multinode-240000-m02 in Controller"
	I0328 01:33:27.446766    6044 command_runner.go:130] ! I0328 01:11:17.665064       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-240000-m02"
	I0328 01:33:27.446766    6044 command_runner.go:130] ! I0328 01:11:46.242165       1 event.go:376] "Event occurred" object="default/busybox" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set busybox-7fdf7869d9 to 2"
	I0328 01:33:27.446766    6044 command_runner.go:130] ! I0328 01:11:46.265582       1 event.go:376] "Event occurred" object="default/busybox-7fdf7869d9" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-7fdf7869d9-zgwm4"
	I0328 01:33:27.446766    6044 command_runner.go:130] ! I0328 01:11:46.287052       1 event.go:376] "Event occurred" object="default/busybox-7fdf7869d9" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-7fdf7869d9-ct428"
	I0328 01:33:27.446766    6044 command_runner.go:130] ! I0328 01:11:46.306059       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="64.440988ms"
	I0328 01:33:27.446766    6044 command_runner.go:130] ! I0328 01:11:46.352353       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="46.180707ms"
	I0328 01:33:27.446766    6044 command_runner.go:130] ! I0328 01:11:46.354927       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="106.701µs"
	I0328 01:33:27.446766    6044 command_runner.go:130] ! I0328 01:11:46.380446       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="75.4µs"
	I0328 01:33:27.446766    6044 command_runner.go:130] ! I0328 01:11:49.177937       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="20.338671ms"
	I0328 01:33:27.446766    6044 command_runner.go:130] ! I0328 01:11:49.178143       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="95.8µs"
	I0328 01:33:27.446766    6044 command_runner.go:130] ! I0328 01:11:49.352601       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="13.382248ms"
	I0328 01:33:27.446766    6044 command_runner.go:130] ! I0328 01:11:49.353052       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="62.5µs"
	I0328 01:33:27.446766    6044 command_runner.go:130] ! I0328 01:15:58.358805       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-240000-m03\" does not exist"
	I0328 01:33:27.446766    6044 command_runner.go:130] ! I0328 01:15:58.359348       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-240000-m02"
	I0328 01:33:27.446766    6044 command_runner.go:130] ! I0328 01:15:58.402286       1 event.go:376] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-jvgx2"
	I0328 01:33:27.446766    6044 command_runner.go:130] ! I0328 01:15:58.402827       1 event.go:376] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-55rch"
	I0328 01:33:27.446766    6044 command_runner.go:130] ! I0328 01:15:58.405421       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-240000-m03" podCIDRs=["10.244.2.0/24"]
	I0328 01:33:27.446766    6044 command_runner.go:130] ! I0328 01:15:59.058703       1 node_lifecycle_controller.go:874] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-240000-m03"
	I0328 01:33:27.446766    6044 command_runner.go:130] ! I0328 01:15:59.059180       1 event.go:376] "Event occurred" object="multinode-240000-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-240000-m03 event: Registered Node multinode-240000-m03 in Controller"
	I0328 01:33:27.446766    6044 command_runner.go:130] ! I0328 01:16:20.751668       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-240000-m02"
	I0328 01:33:27.446766    6044 command_runner.go:130] ! I0328 01:24:29.197407       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-240000-m02"
	I0328 01:33:27.446766    6044 command_runner.go:130] ! I0328 01:24:29.203202       1 event.go:376] "Event occurred" object="multinode-240000-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node multinode-240000-m03 status is now: NodeNotReady"
	I0328 01:33:27.446766    6044 command_runner.go:130] ! I0328 01:24:29.229608       1 event.go:376] "Event occurred" object="kube-system/kube-proxy-55rch" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0328 01:33:27.446766    6044 command_runner.go:130] ! I0328 01:24:29.247522       1 event.go:376] "Event occurred" object="kube-system/kindnet-jvgx2" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0328 01:33:27.446766    6044 command_runner.go:130] ! I0328 01:27:23.686830       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-240000-m02"
	I0328 01:33:27.446766    6044 command_runner.go:130] ! I0328 01:27:24.286010       1 event.go:376] "Event occurred" object="multinode-240000-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RemovingNode" message="Node multinode-240000-m03 event: Removing Node multinode-240000-m03 from Controller"
	I0328 01:33:27.446766    6044 command_runner.go:130] ! I0328 01:27:30.358404       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-240000-m02"
	I0328 01:33:27.446766    6044 command_runner.go:130] ! I0328 01:27:30.361770       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-240000-m03\" does not exist"
	I0328 01:33:27.446766    6044 command_runner.go:130] ! I0328 01:27:30.394360       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-240000-m03" podCIDRs=["10.244.3.0/24"]
	I0328 01:33:27.446766    6044 command_runner.go:130] ! I0328 01:27:34.288477       1 event.go:376] "Event occurred" object="multinode-240000-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-240000-m03 event: Registered Node multinode-240000-m03 in Controller"
	I0328 01:33:27.446766    6044 command_runner.go:130] ! I0328 01:27:36.134336       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-240000-m03"
	I0328 01:33:27.446766    6044 command_runner.go:130] ! I0328 01:29:14.344304       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-240000-m02"
	I0328 01:33:27.446766    6044 command_runner.go:130] ! I0328 01:29:14.346290       1 event.go:376] "Event occurred" object="multinode-240000-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node multinode-240000-m03 status is now: NodeNotReady"
	I0328 01:33:27.446766    6044 command_runner.go:130] ! I0328 01:29:14.370766       1 event.go:376] "Event occurred" object="kube-system/kube-proxy-55rch" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0328 01:33:27.446766    6044 command_runner.go:130] ! I0328 01:29:14.392308       1 event.go:376] "Event occurred" object="kube-system/kindnet-jvgx2" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0328 01:33:27.472768    6044 logs.go:123] Gathering logs for kindnet [dc9808261b21] ...
	I0328 01:33:27.472768    6044 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc9808261b21"
	I0328 01:33:27.515365    6044 command_runner.go:130] ! I0328 01:18:33.819057       1 main.go:227] handling current node
	I0328 01:33:27.515510    6044 command_runner.go:130] ! I0328 01:18:33.819073       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:27.515510    6044 command_runner.go:130] ! I0328 01:18:33.819080       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:27.515510    6044 command_runner.go:130] ! I0328 01:18:33.819256       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:27.515586    6044 command_runner.go:130] ! I0328 01:18:33.819279       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:27.515586    6044 command_runner.go:130] ! I0328 01:18:43.840507       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:27.515586    6044 command_runner.go:130] ! I0328 01:18:43.840617       1 main.go:227] handling current node
	I0328 01:33:27.515586    6044 command_runner.go:130] ! I0328 01:18:43.840633       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:27.515586    6044 command_runner.go:130] ! I0328 01:18:43.840643       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:27.515586    6044 command_runner.go:130] ! I0328 01:18:43.841217       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:27.515586    6044 command_runner.go:130] ! I0328 01:18:43.841333       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:27.515586    6044 command_runner.go:130] ! I0328 01:18:53.861521       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:27.515691    6044 command_runner.go:130] ! I0328 01:18:53.861738       1 main.go:227] handling current node
	I0328 01:33:27.515719    6044 command_runner.go:130] ! I0328 01:18:53.861763       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:27.515719    6044 command_runner.go:130] ! I0328 01:18:53.861779       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:27.515719    6044 command_runner.go:130] ! I0328 01:18:53.864849       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:27.515719    6044 command_runner.go:130] ! I0328 01:18:53.864869       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:27.515719    6044 command_runner.go:130] ! I0328 01:19:03.880199       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:27.515719    6044 command_runner.go:130] ! I0328 01:19:03.880733       1 main.go:227] handling current node
	I0328 01:33:27.515719    6044 command_runner.go:130] ! I0328 01:19:03.880872       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:27.515807    6044 command_runner.go:130] ! I0328 01:19:03.880900       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:27.515827    6044 command_runner.go:130] ! I0328 01:19:03.881505       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:27.515827    6044 command_runner.go:130] ! I0328 01:19:03.881543       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:27.515827    6044 command_runner.go:130] ! I0328 01:19:13.889436       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:27.515827    6044 command_runner.go:130] ! I0328 01:19:13.889552       1 main.go:227] handling current node
	I0328 01:33:27.515888    6044 command_runner.go:130] ! I0328 01:19:13.889571       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:27.515888    6044 command_runner.go:130] ! I0328 01:19:13.889581       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:27.515888    6044 command_runner.go:130] ! I0328 01:19:13.889757       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:27.515888    6044 command_runner.go:130] ! I0328 01:19:13.889789       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:27.515888    6044 command_runner.go:130] ! I0328 01:19:23.898023       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:27.515972    6044 command_runner.go:130] ! I0328 01:19:23.898229       1 main.go:227] handling current node
	I0328 01:33:27.515972    6044 command_runner.go:130] ! I0328 01:19:23.898245       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:27.515972    6044 command_runner.go:130] ! I0328 01:19:23.898277       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:27.515972    6044 command_runner.go:130] ! I0328 01:19:23.898405       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:27.516034    6044 command_runner.go:130] ! I0328 01:19:23.898492       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:27.516034    6044 command_runner.go:130] ! I0328 01:19:33.905977       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:27.516085    6044 command_runner.go:130] ! I0328 01:19:33.906123       1 main.go:227] handling current node
	I0328 01:33:27.516085    6044 command_runner.go:130] ! I0328 01:19:33.906157       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:27.516085    6044 command_runner.go:130] ! I0328 01:19:33.906167       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:27.516085    6044 command_runner.go:130] ! I0328 01:19:33.906618       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:27.516085    6044 command_runner.go:130] ! I0328 01:19:33.906762       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:27.516085    6044 command_runner.go:130] ! I0328 01:19:43.914797       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:27.516085    6044 command_runner.go:130] ! I0328 01:19:43.914849       1 main.go:227] handling current node
	I0328 01:33:27.516085    6044 command_runner.go:130] ! I0328 01:19:43.914863       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:27.516085    6044 command_runner.go:130] ! I0328 01:19:43.914872       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:27.516085    6044 command_runner.go:130] ! I0328 01:19:43.915508       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:27.516085    6044 command_runner.go:130] ! I0328 01:19:43.915608       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:27.516085    6044 command_runner.go:130] ! I0328 01:19:53.928273       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:27.516085    6044 command_runner.go:130] ! I0328 01:19:53.928372       1 main.go:227] handling current node
	I0328 01:33:27.516085    6044 command_runner.go:130] ! I0328 01:19:53.928389       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:27.516085    6044 command_runner.go:130] ! I0328 01:19:53.928398       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:27.516620    6044 command_runner.go:130] ! I0328 01:19:53.928659       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:27.516699    6044 command_runner.go:130] ! I0328 01:19:53.928813       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:27.516699    6044 command_runner.go:130] ! I0328 01:20:03.943868       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:27.516699    6044 command_runner.go:130] ! I0328 01:20:03.943974       1 main.go:227] handling current node
	I0328 01:33:27.516762    6044 command_runner.go:130] ! I0328 01:20:03.943995       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:27.516762    6044 command_runner.go:130] ! I0328 01:20:03.944004       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:27.516762    6044 command_runner.go:130] ! I0328 01:20:03.944882       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:27.516762    6044 command_runner.go:130] ! I0328 01:20:03.944986       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:27.516762    6044 command_runner.go:130] ! I0328 01:20:13.959538       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:27.516762    6044 command_runner.go:130] ! I0328 01:20:13.959588       1 main.go:227] handling current node
	I0328 01:33:27.516896    6044 command_runner.go:130] ! I0328 01:20:13.959601       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:27.516896    6044 command_runner.go:130] ! I0328 01:20:13.959609       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:27.516972    6044 command_runner.go:130] ! I0328 01:20:13.960072       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:27.516972    6044 command_runner.go:130] ! I0328 01:20:13.960245       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:27.516972    6044 command_runner.go:130] ! I0328 01:20:23.967471       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:27.516972    6044 command_runner.go:130] ! I0328 01:20:23.967523       1 main.go:227] handling current node
	I0328 01:33:27.516972    6044 command_runner.go:130] ! I0328 01:20:23.967537       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:27.517034    6044 command_runner.go:130] ! I0328 01:20:23.967547       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:27.517034    6044 command_runner.go:130] ! I0328 01:20:23.968155       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:27.517093    6044 command_runner.go:130] ! I0328 01:20:23.968173       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:27.517093    6044 command_runner.go:130] ! I0328 01:20:33.977018       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:27.517093    6044 command_runner.go:130] ! I0328 01:20:33.977224       1 main.go:227] handling current node
	I0328 01:33:27.517159    6044 command_runner.go:130] ! I0328 01:20:33.977259       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:27.517186    6044 command_runner.go:130] ! I0328 01:20:33.977287       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:27.517186    6044 command_runner.go:130] ! I0328 01:20:33.978024       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:27.517186    6044 command_runner.go:130] ! I0328 01:20:33.978173       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:27.517245    6044 command_runner.go:130] ! I0328 01:20:43.987057       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:27.517275    6044 command_runner.go:130] ! I0328 01:20:43.987266       1 main.go:227] handling current node
	I0328 01:33:27.517296    6044 command_runner.go:130] ! I0328 01:20:43.987283       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:27.517322    6044 command_runner.go:130] ! I0328 01:20:43.987293       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:27.517322    6044 command_runner.go:130] ! I0328 01:20:43.987429       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:27.517322    6044 command_runner.go:130] ! I0328 01:20:43.987462       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:27.517322    6044 command_runner.go:130] ! I0328 01:20:53.994024       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:27.517322    6044 command_runner.go:130] ! I0328 01:20:53.994070       1 main.go:227] handling current node
	I0328 01:33:27.517322    6044 command_runner.go:130] ! I0328 01:20:53.994120       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:27.517322    6044 command_runner.go:130] ! I0328 01:20:53.994132       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:27.517322    6044 command_runner.go:130] ! I0328 01:20:53.994628       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:27.517322    6044 command_runner.go:130] ! I0328 01:20:53.994669       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:27.517322    6044 command_runner.go:130] ! I0328 01:21:04.009908       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:27.517322    6044 command_runner.go:130] ! I0328 01:21:04.010006       1 main.go:227] handling current node
	I0328 01:33:27.517322    6044 command_runner.go:130] ! I0328 01:21:04.010023       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:27.517322    6044 command_runner.go:130] ! I0328 01:21:04.010033       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:27.517322    6044 command_runner.go:130] ! I0328 01:21:04.010413       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:27.517322    6044 command_runner.go:130] ! I0328 01:21:04.010445       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:27.517322    6044 command_runner.go:130] ! I0328 01:21:14.024266       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:27.517322    6044 command_runner.go:130] ! I0328 01:21:14.024350       1 main.go:227] handling current node
	I0328 01:33:27.517322    6044 command_runner.go:130] ! I0328 01:21:14.024365       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:27.517322    6044 command_runner.go:130] ! I0328 01:21:14.024372       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:27.517322    6044 command_runner.go:130] ! I0328 01:21:14.024495       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:27.517322    6044 command_runner.go:130] ! I0328 01:21:14.024525       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:27.517322    6044 command_runner.go:130] ! I0328 01:21:24.033056       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:27.517322    6044 command_runner.go:130] ! I0328 01:21:24.033221       1 main.go:227] handling current node
	I0328 01:33:27.517322    6044 command_runner.go:130] ! I0328 01:21:24.033244       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:27.517322    6044 command_runner.go:130] ! I0328 01:21:24.033254       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:27.517322    6044 command_runner.go:130] ! I0328 01:21:24.033447       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:27.517322    6044 command_runner.go:130] ! I0328 01:21:24.033718       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:27.517322    6044 command_runner.go:130] ! I0328 01:21:34.054141       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:27.517322    6044 command_runner.go:130] ! I0328 01:21:34.054348       1 main.go:227] handling current node
	I0328 01:33:27.517322    6044 command_runner.go:130] ! I0328 01:21:34.054367       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:27.517322    6044 command_runner.go:130] ! I0328 01:21:34.054377       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:27.517322    6044 command_runner.go:130] ! I0328 01:21:34.056796       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:27.517322    6044 command_runner.go:130] ! I0328 01:21:34.056838       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:27.517322    6044 command_runner.go:130] ! I0328 01:21:44.063011       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:27.517322    6044 command_runner.go:130] ! I0328 01:21:44.063388       1 main.go:227] handling current node
	I0328 01:33:27.517322    6044 command_runner.go:130] ! I0328 01:21:44.063639       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:27.517322    6044 command_runner.go:130] ! I0328 01:21:44.063794       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:27.517322    6044 command_runner.go:130] ! I0328 01:21:44.064166       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:27.517876    6044 command_runner.go:130] ! I0328 01:21:44.064351       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:27.517876    6044 command_runner.go:130] ! I0328 01:21:54.080807       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:27.517990    6044 command_runner.go:130] ! I0328 01:21:54.080904       1 main.go:227] handling current node
	I0328 01:33:27.517990    6044 command_runner.go:130] ! I0328 01:21:54.080921       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:27.517990    6044 command_runner.go:130] ! I0328 01:21:54.080930       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:27.517990    6044 command_runner.go:130] ! I0328 01:21:54.081415       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:27.517990    6044 command_runner.go:130] ! I0328 01:21:54.081491       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:27.517990    6044 command_runner.go:130] ! I0328 01:22:04.094961       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:27.518061    6044 command_runner.go:130] ! I0328 01:22:04.095397       1 main.go:227] handling current node
	I0328 01:33:27.518080    6044 command_runner.go:130] ! I0328 01:22:04.095905       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:27.518080    6044 command_runner.go:130] ! I0328 01:22:04.096341       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:27.518080    6044 command_runner.go:130] ! I0328 01:22:04.096776       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:27.518080    6044 command_runner.go:130] ! I0328 01:22:04.096877       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:27.518080    6044 command_runner.go:130] ! I0328 01:22:14.117899       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:27.518148    6044 command_runner.go:130] ! I0328 01:22:14.118038       1 main.go:227] handling current node
	I0328 01:33:27.518148    6044 command_runner.go:130] ! I0328 01:22:14.118158       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:27.518194    6044 command_runner.go:130] ! I0328 01:22:14.118310       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:27.518242    6044 command_runner.go:130] ! I0328 01:22:14.118821       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:27.518242    6044 command_runner.go:130] ! I0328 01:22:14.119057       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:27.518242    6044 command_runner.go:130] ! I0328 01:22:24.139816       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:27.518242    6044 command_runner.go:130] ! I0328 01:22:24.140951       1 main.go:227] handling current node
	I0328 01:33:27.518242    6044 command_runner.go:130] ! I0328 01:22:24.140979       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:27.518308    6044 command_runner.go:130] ! I0328 01:22:24.140991       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:27.518327    6044 command_runner.go:130] ! I0328 01:22:24.141167       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:27.518327    6044 command_runner.go:130] ! I0328 01:22:24.141178       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:27.518327    6044 command_runner.go:130] ! I0328 01:22:34.156977       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:27.519487    6044 command_runner.go:130] ! I0328 01:22:34.157189       1 main.go:227] handling current node
	I0328 01:33:27.519487    6044 command_runner.go:130] ! I0328 01:22:34.157704       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:27.519487    6044 command_runner.go:130] ! I0328 01:22:34.157819       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:27.519487    6044 command_runner.go:130] ! I0328 01:22:34.158025       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:27.519487    6044 command_runner.go:130] ! I0328 01:22:34.158059       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:27.519487    6044 command_runner.go:130] ! I0328 01:22:44.166881       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:27.519487    6044 command_runner.go:130] ! I0328 01:22:44.167061       1 main.go:227] handling current node
	I0328 01:33:27.519487    6044 command_runner.go:130] ! I0328 01:22:44.167232       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:27.519487    6044 command_runner.go:130] ! I0328 01:22:44.167380       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:27.520029    6044 command_runner.go:130] ! I0328 01:22:44.167748       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:27.520029    6044 command_runner.go:130] ! I0328 01:22:44.167956       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:27.520090    6044 command_runner.go:130] ! I0328 01:22:54.177031       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:27.520090    6044 command_runner.go:130] ! I0328 01:22:54.177191       1 main.go:227] handling current node
	I0328 01:33:27.520090    6044 command_runner.go:130] ! I0328 01:22:54.177209       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:27.520169    6044 command_runner.go:130] ! I0328 01:22:54.177218       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:27.520169    6044 command_runner.go:130] ! I0328 01:22:54.177774       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:27.520169    6044 command_runner.go:130] ! I0328 01:22:54.177906       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:27.520169    6044 command_runner.go:130] ! I0328 01:23:04.192931       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:27.520169    6044 command_runner.go:130] ! I0328 01:23:04.193190       1 main.go:227] handling current node
	I0328 01:33:27.520238    6044 command_runner.go:130] ! I0328 01:23:04.193208       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:27.520238    6044 command_runner.go:130] ! I0328 01:23:04.193218       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:27.520238    6044 command_runner.go:130] ! I0328 01:23:04.193613       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:27.520238    6044 command_runner.go:130] ! I0328 01:23:04.193699       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:27.520238    6044 command_runner.go:130] ! I0328 01:23:14.203281       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:27.520297    6044 command_runner.go:130] ! I0328 01:23:14.203390       1 main.go:227] handling current node
	I0328 01:33:27.520321    6044 command_runner.go:130] ! I0328 01:23:14.203406       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:27.520321    6044 command_runner.go:130] ! I0328 01:23:14.203415       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:27.520349    6044 command_runner.go:130] ! I0328 01:23:14.204005       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:27.520349    6044 command_runner.go:130] ! I0328 01:23:14.204201       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:27.520349    6044 command_runner.go:130] ! I0328 01:23:24.220758       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:27.520349    6044 command_runner.go:130] ! I0328 01:23:24.220806       1 main.go:227] handling current node
	I0328 01:33:27.520349    6044 command_runner.go:130] ! I0328 01:23:24.220822       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:27.520349    6044 command_runner.go:130] ! I0328 01:23:24.220829       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:27.520349    6044 command_runner.go:130] ! I0328 01:23:24.221546       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:27.520349    6044 command_runner.go:130] ! I0328 01:23:24.221683       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:27.520349    6044 command_runner.go:130] ! I0328 01:23:34.228494       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:27.520349    6044 command_runner.go:130] ! I0328 01:23:34.228589       1 main.go:227] handling current node
	I0328 01:33:27.520349    6044 command_runner.go:130] ! I0328 01:23:34.228604       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:27.520349    6044 command_runner.go:130] ! I0328 01:23:34.228613       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:27.520349    6044 command_runner.go:130] ! I0328 01:23:34.229312       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:27.520349    6044 command_runner.go:130] ! I0328 01:23:34.229577       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:27.520349    6044 command_runner.go:130] ! I0328 01:23:44.244452       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:27.520349    6044 command_runner.go:130] ! I0328 01:23:44.244582       1 main.go:227] handling current node
	I0328 01:33:27.520349    6044 command_runner.go:130] ! I0328 01:23:44.244601       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:27.520349    6044 command_runner.go:130] ! I0328 01:23:44.244611       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:27.520349    6044 command_runner.go:130] ! I0328 01:23:44.245136       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:27.520349    6044 command_runner.go:130] ! I0328 01:23:44.245156       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:27.520349    6044 command_runner.go:130] ! I0328 01:23:54.250789       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:27.520349    6044 command_runner.go:130] ! I0328 01:23:54.250891       1 main.go:227] handling current node
	I0328 01:33:27.520349    6044 command_runner.go:130] ! I0328 01:23:54.250907       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:27.520349    6044 command_runner.go:130] ! I0328 01:23:54.250915       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:27.520349    6044 command_runner.go:130] ! I0328 01:23:54.251035       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:27.520349    6044 command_runner.go:130] ! I0328 01:23:54.251227       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:27.520349    6044 command_runner.go:130] ! I0328 01:24:04.266517       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:27.520349    6044 command_runner.go:130] ! I0328 01:24:04.266634       1 main.go:227] handling current node
	I0328 01:33:27.520349    6044 command_runner.go:130] ! I0328 01:24:04.266650       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:27.520349    6044 command_runner.go:130] ! I0328 01:24:04.266659       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:27.520349    6044 command_runner.go:130] ! I0328 01:24:04.266860       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:27.520349    6044 command_runner.go:130] ! I0328 01:24:04.266944       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:27.520349    6044 command_runner.go:130] ! I0328 01:24:14.281321       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:27.520349    6044 command_runner.go:130] ! I0328 01:24:14.281432       1 main.go:227] handling current node
	I0328 01:33:27.520349    6044 command_runner.go:130] ! I0328 01:24:14.281448       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:27.520349    6044 command_runner.go:130] ! I0328 01:24:14.281474       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:27.520349    6044 command_runner.go:130] ! I0328 01:24:14.281660       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:27.520349    6044 command_runner.go:130] ! I0328 01:24:14.281692       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:27.520886    6044 command_runner.go:130] ! I0328 01:24:24.289822       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:27.520947    6044 command_runner.go:130] ! I0328 01:24:24.290280       1 main.go:227] handling current node
	I0328 01:33:27.520947    6044 command_runner.go:130] ! I0328 01:24:24.290352       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:27.520947    6044 command_runner.go:130] ! I0328 01:24:24.290467       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:27.520947    6044 command_runner.go:130] ! I0328 01:24:24.290854       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:27.520947    6044 command_runner.go:130] ! I0328 01:24:24.290943       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:27.520947    6044 command_runner.go:130] ! I0328 01:24:34.303810       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:27.520947    6044 command_runner.go:130] ! I0328 01:24:34.303934       1 main.go:227] handling current node
	I0328 01:33:27.520947    6044 command_runner.go:130] ! I0328 01:24:34.303965       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:27.520947    6044 command_runner.go:130] ! I0328 01:24:34.303998       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:27.520947    6044 command_runner.go:130] ! I0328 01:24:34.304417       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:27.520947    6044 command_runner.go:130] ! I0328 01:24:34.304435       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:27.520947    6044 command_runner.go:130] ! I0328 01:24:44.325930       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:27.520947    6044 command_runner.go:130] ! I0328 01:24:44.326037       1 main.go:227] handling current node
	I0328 01:33:27.520947    6044 command_runner.go:130] ! I0328 01:24:44.326055       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:27.520947    6044 command_runner.go:130] ! I0328 01:24:44.326064       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:27.520947    6044 command_runner.go:130] ! I0328 01:24:44.327133       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:27.520947    6044 command_runner.go:130] ! I0328 01:24:44.327169       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:27.520947    6044 command_runner.go:130] ! I0328 01:24:54.342811       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:27.520947    6044 command_runner.go:130] ! I0328 01:24:54.342842       1 main.go:227] handling current node
	I0328 01:33:27.520947    6044 command_runner.go:130] ! I0328 01:24:54.342871       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:27.520947    6044 command_runner.go:130] ! I0328 01:24:54.342878       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:27.520947    6044 command_runner.go:130] ! I0328 01:24:54.343008       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:27.520947    6044 command_runner.go:130] ! I0328 01:24:54.343016       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:27.520947    6044 command_runner.go:130] ! I0328 01:25:04.359597       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:27.520947    6044 command_runner.go:130] ! I0328 01:25:04.359702       1 main.go:227] handling current node
	I0328 01:33:27.520947    6044 command_runner.go:130] ! I0328 01:25:04.359718       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:27.520947    6044 command_runner.go:130] ! I0328 01:25:04.359727       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:27.520947    6044 command_runner.go:130] ! I0328 01:25:04.360480       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:27.520947    6044 command_runner.go:130] ! I0328 01:25:04.360570       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:27.520947    6044 command_runner.go:130] ! I0328 01:25:14.367988       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:27.520947    6044 command_runner.go:130] ! I0328 01:25:14.368593       1 main.go:227] handling current node
	I0328 01:33:27.520947    6044 command_runner.go:130] ! I0328 01:25:14.368613       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:27.520947    6044 command_runner.go:130] ! I0328 01:25:14.368623       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:27.520947    6044 command_runner.go:130] ! I0328 01:25:14.368889       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:27.520947    6044 command_runner.go:130] ! I0328 01:25:14.368925       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:27.520947    6044 command_runner.go:130] ! I0328 01:25:24.402024       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:27.520947    6044 command_runner.go:130] ! I0328 01:25:24.402202       1 main.go:227] handling current node
	I0328 01:33:27.520947    6044 command_runner.go:130] ! I0328 01:25:24.402220       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:27.520947    6044 command_runner.go:130] ! I0328 01:25:24.402229       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:27.520947    6044 command_runner.go:130] ! I0328 01:25:24.402486       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:27.520947    6044 command_runner.go:130] ! I0328 01:25:24.402522       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:27.520947    6044 command_runner.go:130] ! I0328 01:25:34.417358       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:27.521480    6044 command_runner.go:130] ! I0328 01:25:34.417459       1 main.go:227] handling current node
	I0328 01:33:27.521480    6044 command_runner.go:130] ! I0328 01:25:34.417475       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:27.521480    6044 command_runner.go:130] ! I0328 01:25:34.417485       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:27.521527    6044 command_runner.go:130] ! I0328 01:25:34.417877       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:27.521527    6044 command_runner.go:130] ! I0328 01:25:34.418025       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:27.521527    6044 command_runner.go:130] ! I0328 01:25:44.434985       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:27.521527    6044 command_runner.go:130] ! I0328 01:25:44.435206       1 main.go:227] handling current node
	I0328 01:33:27.521527    6044 command_runner.go:130] ! I0328 01:25:44.435441       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:27.521527    6044 command_runner.go:130] ! I0328 01:25:44.435475       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:27.521527    6044 command_runner.go:130] ! I0328 01:25:44.435904       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:27.521527    6044 command_runner.go:130] ! I0328 01:25:44.436000       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:27.521637    6044 command_runner.go:130] ! I0328 01:25:54.449873       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:27.521637    6044 command_runner.go:130] ! I0328 01:25:54.449975       1 main.go:227] handling current node
	I0328 01:33:27.521637    6044 command_runner.go:130] ! I0328 01:25:54.449990       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:27.521637    6044 command_runner.go:130] ! I0328 01:25:54.449999       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:27.521701    6044 command_runner.go:130] ! I0328 01:25:54.450243       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:27.521701    6044 command_runner.go:130] ! I0328 01:25:54.450388       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:27.521701    6044 command_runner.go:130] ! I0328 01:26:04.463682       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:27.521701    6044 command_runner.go:130] ! I0328 01:26:04.463799       1 main.go:227] handling current node
	I0328 01:33:27.521795    6044 command_runner.go:130] ! I0328 01:26:04.463816       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:27.521795    6044 command_runner.go:130] ! I0328 01:26:04.463828       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:27.521861    6044 command_runner.go:130] ! I0328 01:26:04.463959       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:27.521861    6044 command_runner.go:130] ! I0328 01:26:04.463990       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:27.521886    6044 command_runner.go:130] ! I0328 01:26:14.470825       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:27.521886    6044 command_runner.go:130] ! I0328 01:26:14.471577       1 main.go:227] handling current node
	I0328 01:33:27.521915    6044 command_runner.go:130] ! I0328 01:26:14.471678       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:27.521915    6044 command_runner.go:130] ! I0328 01:26:14.471692       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:27.521915    6044 command_runner.go:130] ! I0328 01:26:14.472010       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:27.521915    6044 command_runner.go:130] ! I0328 01:26:14.472170       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:27.521915    6044 command_runner.go:130] ! I0328 01:26:24.485860       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:27.521915    6044 command_runner.go:130] ! I0328 01:26:24.485913       1 main.go:227] handling current node
	I0328 01:33:27.521915    6044 command_runner.go:130] ! I0328 01:26:24.485944       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:27.521915    6044 command_runner.go:130] ! I0328 01:26:24.485951       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:27.521915    6044 command_runner.go:130] ! I0328 01:26:24.486383       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:27.521915    6044 command_runner.go:130] ! I0328 01:26:24.486499       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:27.521915    6044 command_runner.go:130] ! I0328 01:26:34.502352       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:27.521915    6044 command_runner.go:130] ! I0328 01:26:34.502457       1 main.go:227] handling current node
	I0328 01:33:27.521915    6044 command_runner.go:130] ! I0328 01:26:34.502475       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:27.521915    6044 command_runner.go:130] ! I0328 01:26:34.502484       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:27.521915    6044 command_runner.go:130] ! I0328 01:26:34.502671       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:27.521915    6044 command_runner.go:130] ! I0328 01:26:34.502731       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:27.521915    6044 command_runner.go:130] ! I0328 01:26:44.515791       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:27.521915    6044 command_runner.go:130] ! I0328 01:26:44.516785       1 main.go:227] handling current node
	I0328 01:33:27.521915    6044 command_runner.go:130] ! I0328 01:26:44.517605       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:27.521915    6044 command_runner.go:130] ! I0328 01:26:44.518163       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:27.521915    6044 command_runner.go:130] ! I0328 01:26:44.518724       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:27.521915    6044 command_runner.go:130] ! I0328 01:26:44.519042       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:27.521915    6044 command_runner.go:130] ! I0328 01:26:54.536706       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:27.521915    6044 command_runner.go:130] ! I0328 01:26:54.536762       1 main.go:227] handling current node
	I0328 01:33:27.521915    6044 command_runner.go:130] ! I0328 01:26:54.536796       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:27.521915    6044 command_runner.go:130] ! I0328 01:26:54.537236       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:27.521915    6044 command_runner.go:130] ! I0328 01:26:54.537725       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:27.521915    6044 command_runner.go:130] ! I0328 01:26:54.537823       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:27.521915    6044 command_runner.go:130] ! I0328 01:27:04.553753       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:27.521915    6044 command_runner.go:130] ! I0328 01:27:04.553802       1 main.go:227] handling current node
	I0328 01:33:27.521915    6044 command_runner.go:130] ! I0328 01:27:04.553813       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:27.521915    6044 command_runner.go:130] ! I0328 01:27:04.553820       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:27.521915    6044 command_runner.go:130] ! I0328 01:27:04.554279       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:27.521915    6044 command_runner.go:130] ! I0328 01:27:04.554301       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:27.521915    6044 command_runner.go:130] ! I0328 01:27:14.572473       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:27.521915    6044 command_runner.go:130] ! I0328 01:27:14.572567       1 main.go:227] handling current node
	I0328 01:33:27.521915    6044 command_runner.go:130] ! I0328 01:27:14.572583       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:27.521915    6044 command_runner.go:130] ! I0328 01:27:14.572591       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:27.521915    6044 command_runner.go:130] ! I0328 01:27:14.572710       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:27.521915    6044 command_runner.go:130] ! I0328 01:27:14.572740       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:27.522476    6044 command_runner.go:130] ! I0328 01:27:24.579996       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:27.522476    6044 command_runner.go:130] ! I0328 01:27:24.580041       1 main.go:227] handling current node
	I0328 01:33:27.522476    6044 command_runner.go:130] ! I0328 01:27:24.580053       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:27.522538    6044 command_runner.go:130] ! I0328 01:27:24.580357       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:27.522538    6044 command_runner.go:130] ! I0328 01:27:34.590722       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:27.522538    6044 command_runner.go:130] ! I0328 01:27:34.590837       1 main.go:227] handling current node
	I0328 01:33:27.522538    6044 command_runner.go:130] ! I0328 01:27:34.590855       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:27.522538    6044 command_runner.go:130] ! I0328 01:27:34.590864       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:27.522627    6044 command_runner.go:130] ! I0328 01:27:34.591158       1 main.go:223] Handling node with IPs: map[172.28.224.172:{}]
	I0328 01:33:27.522627    6044 command_runner.go:130] ! I0328 01:27:34.591426       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.3.0/24] 
	I0328 01:33:27.522686    6044 command_runner.go:130] ! I0328 01:27:34.591599       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.3.0/24 Src: <nil> Gw: 172.28.224.172 Flags: [] Table: 0} 
	I0328 01:33:27.522686    6044 command_runner.go:130] ! I0328 01:27:44.598527       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:27.522686    6044 command_runner.go:130] ! I0328 01:27:44.598576       1 main.go:227] handling current node
	I0328 01:33:27.522686    6044 command_runner.go:130] ! I0328 01:27:44.598590       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:27.522756    6044 command_runner.go:130] ! I0328 01:27:44.598597       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:27.522756    6044 command_runner.go:130] ! I0328 01:27:44.599051       1 main.go:223] Handling node with IPs: map[172.28.224.172:{}]
	I0328 01:33:27.522797    6044 command_runner.go:130] ! I0328 01:27:44.599199       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.3.0/24] 
	I0328 01:33:27.522797    6044 command_runner.go:130] ! I0328 01:27:54.612380       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:27.522797    6044 command_runner.go:130] ! I0328 01:27:54.612492       1 main.go:227] handling current node
	I0328 01:33:27.522937    6044 command_runner.go:130] ! I0328 01:27:54.612511       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:27.523022    6044 command_runner.go:130] ! I0328 01:27:54.612521       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:27.523022    6044 command_runner.go:130] ! I0328 01:27:54.612644       1 main.go:223] Handling node with IPs: map[172.28.224.172:{}]
	I0328 01:33:27.523022    6044 command_runner.go:130] ! I0328 01:27:54.612675       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.3.0/24] 
	I0328 01:33:27.523070    6044 command_runner.go:130] ! I0328 01:28:04.619944       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:27.523070    6044 command_runner.go:130] ! I0328 01:28:04.619975       1 main.go:227] handling current node
	I0328 01:33:27.523115    6044 command_runner.go:130] ! I0328 01:28:04.619987       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:27.523115    6044 command_runner.go:130] ! I0328 01:28:04.619994       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:27.523115    6044 command_runner.go:130] ! I0328 01:28:04.620739       1 main.go:223] Handling node with IPs: map[172.28.224.172:{}]
	I0328 01:33:27.523163    6044 command_runner.go:130] ! I0328 01:28:04.620826       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.3.0/24] 
	I0328 01:33:27.523163    6044 command_runner.go:130] ! I0328 01:28:14.637978       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:27.523221    6044 command_runner.go:130] ! I0328 01:28:14.638455       1 main.go:227] handling current node
	I0328 01:33:27.523221    6044 command_runner.go:130] ! I0328 01:28:14.639024       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:27.523246    6044 command_runner.go:130] ! I0328 01:28:14.639507       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:27.523246    6044 command_runner.go:130] ! I0328 01:28:14.640025       1 main.go:223] Handling node with IPs: map[172.28.224.172:{}]
	I0328 01:33:27.523246    6044 command_runner.go:130] ! I0328 01:28:14.640512       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.3.0/24] 
	I0328 01:33:27.523292    6044 command_runner.go:130] ! I0328 01:28:24.648901       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:27.523326    6044 command_runner.go:130] ! I0328 01:28:24.649550       1 main.go:227] handling current node
	I0328 01:33:27.523342    6044 command_runner.go:130] ! I0328 01:28:24.649741       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:27.523342    6044 command_runner.go:130] ! I0328 01:28:24.650198       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:27.523342    6044 command_runner.go:130] ! I0328 01:28:24.650806       1 main.go:223] Handling node with IPs: map[172.28.224.172:{}]
	I0328 01:33:27.523342    6044 command_runner.go:130] ! I0328 01:28:24.651143       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.3.0/24] 
	I0328 01:33:27.523399    6044 command_runner.go:130] ! I0328 01:28:34.657839       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:27.523399    6044 command_runner.go:130] ! I0328 01:28:34.658038       1 main.go:227] handling current node
	I0328 01:33:27.523399    6044 command_runner.go:130] ! I0328 01:28:34.658054       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:27.523399    6044 command_runner.go:130] ! I0328 01:28:34.658080       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:27.523459    6044 command_runner.go:130] ! I0328 01:28:34.658271       1 main.go:223] Handling node with IPs: map[172.28.224.172:{}]
	I0328 01:33:27.523483    6044 command_runner.go:130] ! I0328 01:28:34.658831       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.3.0/24] 
	I0328 01:33:27.523483    6044 command_runner.go:130] ! I0328 01:28:44.666644       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:27.523483    6044 command_runner.go:130] ! I0328 01:28:44.666752       1 main.go:227] handling current node
	I0328 01:33:27.523529    6044 command_runner.go:130] ! I0328 01:28:44.666769       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:27.523551    6044 command_runner.go:130] ! I0328 01:28:44.666778       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:27.523551    6044 command_runner.go:130] ! I0328 01:28:44.667298       1 main.go:223] Handling node with IPs: map[172.28.224.172:{}]
	I0328 01:33:27.523551    6044 command_runner.go:130] ! I0328 01:28:44.667513       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.3.0/24] 
	I0328 01:33:27.523551    6044 command_runner.go:130] ! I0328 01:28:54.679890       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:27.523551    6044 command_runner.go:130] ! I0328 01:28:54.679999       1 main.go:227] handling current node
	I0328 01:33:27.523551    6044 command_runner.go:130] ! I0328 01:28:54.680015       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:27.523625    6044 command_runner.go:130] ! I0328 01:28:54.680023       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:27.523625    6044 command_runner.go:130] ! I0328 01:28:54.680512       1 main.go:223] Handling node with IPs: map[172.28.224.172:{}]
	I0328 01:33:27.523625    6044 command_runner.go:130] ! I0328 01:28:54.680547       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.3.0/24] 
	I0328 01:33:27.523625    6044 command_runner.go:130] ! I0328 01:29:04.687598       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:27.523625    6044 command_runner.go:130] ! I0328 01:29:04.687765       1 main.go:227] handling current node
	I0328 01:33:27.523625    6044 command_runner.go:130] ! I0328 01:29:04.687785       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:27.523625    6044 command_runner.go:130] ! I0328 01:29:04.687796       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:27.523625    6044 command_runner.go:130] ! I0328 01:29:04.687963       1 main.go:223] Handling node with IPs: map[172.28.224.172:{}]
	I0328 01:33:27.523625    6044 command_runner.go:130] ! I0328 01:29:04.687979       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.3.0/24] 
	I0328 01:33:27.523625    6044 command_runner.go:130] ! I0328 01:29:14.698762       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:27.523625    6044 command_runner.go:130] ! I0328 01:29:14.698810       1 main.go:227] handling current node
	I0328 01:33:27.523625    6044 command_runner.go:130] ! I0328 01:29:14.698825       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:27.523625    6044 command_runner.go:130] ! I0328 01:29:14.698832       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:27.523625    6044 command_runner.go:130] ! I0328 01:29:14.699169       1 main.go:223] Handling node with IPs: map[172.28.224.172:{}]
	I0328 01:33:27.523625    6044 command_runner.go:130] ! I0328 01:29:14.699203       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.3.0/24] 
	I0328 01:33:27.523625    6044 command_runner.go:130] ! I0328 01:29:24.717977       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:27.523625    6044 command_runner.go:130] ! I0328 01:29:24.718118       1 main.go:227] handling current node
	I0328 01:33:27.524219    6044 command_runner.go:130] ! I0328 01:29:24.718136       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:27.524242    6044 command_runner.go:130] ! I0328 01:29:24.718145       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:27.524242    6044 command_runner.go:130] ! I0328 01:29:24.718279       1 main.go:223] Handling node with IPs: map[172.28.224.172:{}]
	I0328 01:33:27.524242    6044 command_runner.go:130] ! I0328 01:29:24.718311       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.3.0/24] 
	I0328 01:33:27.524242    6044 command_runner.go:130] ! I0328 01:29:34.724517       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:27.524242    6044 command_runner.go:130] ! I0328 01:29:34.724618       1 main.go:227] handling current node
	I0328 01:33:27.524242    6044 command_runner.go:130] ! I0328 01:29:34.724634       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:27.524242    6044 command_runner.go:130] ! I0328 01:29:34.724643       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:27.524242    6044 command_runner.go:130] ! I0328 01:29:34.725226       1 main.go:223] Handling node with IPs: map[172.28.224.172:{}]
	I0328 01:33:27.524242    6044 command_runner.go:130] ! I0328 01:29:34.725416       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.3.0/24] 
	I0328 01:33:27.544537    6044 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:33:27.544537    6044 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0328 01:33:27.811318    6044 command_runner.go:130] > Name:               multinode-240000
	I0328 01:33:27.811389    6044 command_runner.go:130] > Roles:              control-plane
	I0328 01:33:27.811389    6044 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0328 01:33:27.811453    6044 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0328 01:33:27.811453    6044 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0328 01:33:27.811453    6044 command_runner.go:130] >                     kubernetes.io/hostname=multinode-240000
	I0328 01:33:27.811531    6044 command_runner.go:130] >                     kubernetes.io/os=linux
	I0328 01:33:27.811531    6044 command_runner.go:130] >                     minikube.k8s.io/commit=1a940f980f77c9bfc8ef93678db8c1f49c9dd79d
	I0328 01:33:27.811531    6044 command_runner.go:130] >                     minikube.k8s.io/name=multinode-240000
	I0328 01:33:27.811531    6044 command_runner.go:130] >                     minikube.k8s.io/primary=true
	I0328 01:33:27.811531    6044 command_runner.go:130] >                     minikube.k8s.io/updated_at=2024_03_28T01_07_32_0700
	I0328 01:33:27.811642    6044 command_runner.go:130] >                     minikube.k8s.io/version=v1.33.0-beta.0
	I0328 01:33:27.811686    6044 command_runner.go:130] >                     node-role.kubernetes.io/control-plane=
	I0328 01:33:27.811686    6044 command_runner.go:130] >                     node.kubernetes.io/exclude-from-external-load-balancers=
	I0328 01:33:27.811686    6044 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0328 01:33:27.811686    6044 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0328 01:33:27.811686    6044 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0328 01:33:27.811775    6044 command_runner.go:130] > CreationTimestamp:  Thu, 28 Mar 2024 01:07:27 +0000
	I0328 01:33:27.811775    6044 command_runner.go:130] > Taints:             <none>
	I0328 01:33:27.811775    6044 command_runner.go:130] > Unschedulable:      false
	I0328 01:33:27.811775    6044 command_runner.go:130] > Lease:
	I0328 01:33:27.811775    6044 command_runner.go:130] >   HolderIdentity:  multinode-240000
	I0328 01:33:27.811775    6044 command_runner.go:130] >   AcquireTime:     <unset>
	I0328 01:33:27.811775    6044 command_runner.go:130] >   RenewTime:       Thu, 28 Mar 2024 01:33:19 +0000
	I0328 01:33:27.811775    6044 command_runner.go:130] > Conditions:
	I0328 01:33:27.811856    6044 command_runner.go:130] >   Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	I0328 01:33:27.811856    6044 command_runner.go:130] >   ----             ------  -----------------                 ------------------                ------                       -------
	I0328 01:33:27.811856    6044 command_runner.go:130] >   MemoryPressure   False   Thu, 28 Mar 2024 01:32:53 +0000   Thu, 28 Mar 2024 01:07:24 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	I0328 01:33:27.811918    6044 command_runner.go:130] >   DiskPressure     False   Thu, 28 Mar 2024 01:32:53 +0000   Thu, 28 Mar 2024 01:07:24 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	I0328 01:33:27.811954    6044 command_runner.go:130] >   PIDPressure      False   Thu, 28 Mar 2024 01:32:53 +0000   Thu, 28 Mar 2024 01:07:24 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	I0328 01:33:27.811954    6044 command_runner.go:130] >   Ready            True    Thu, 28 Mar 2024 01:32:53 +0000   Thu, 28 Mar 2024 01:32:53 +0000   KubeletReady                 kubelet is posting ready status
	I0328 01:33:27.811954    6044 command_runner.go:130] > Addresses:
	I0328 01:33:27.811954    6044 command_runner.go:130] >   InternalIP:  172.28.229.19
	I0328 01:33:27.811954    6044 command_runner.go:130] >   Hostname:    multinode-240000
	I0328 01:33:27.812015    6044 command_runner.go:130] > Capacity:
	I0328 01:33:27.812015    6044 command_runner.go:130] >   cpu:                2
	I0328 01:33:27.812042    6044 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0328 01:33:27.812042    6044 command_runner.go:130] >   hugepages-2Mi:      0
	I0328 01:33:27.812042    6044 command_runner.go:130] >   memory:             2164264Ki
	I0328 01:33:27.812042    6044 command_runner.go:130] >   pods:               110
	I0328 01:33:27.812042    6044 command_runner.go:130] > Allocatable:
	I0328 01:33:27.812042    6044 command_runner.go:130] >   cpu:                2
	I0328 01:33:27.812107    6044 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0328 01:33:27.812107    6044 command_runner.go:130] >   hugepages-2Mi:      0
	I0328 01:33:27.812107    6044 command_runner.go:130] >   memory:             2164264Ki
	I0328 01:33:27.812133    6044 command_runner.go:130] >   pods:               110
	I0328 01:33:27.812133    6044 command_runner.go:130] > System Info:
	I0328 01:33:27.812133    6044 command_runner.go:130] >   Machine ID:                 fe98ff783f164d50926235b1a1a0c9a9
	I0328 01:33:27.812133    6044 command_runner.go:130] >   System UUID:                074b49af-5c50-b749-b0a9-2a3d75bf97a0
	I0328 01:33:27.812133    6044 command_runner.go:130] >   Boot ID:                    88b5f16c-258a-4fb6-a998-e0ffa63edff9
	I0328 01:33:27.812133    6044 command_runner.go:130] >   Kernel Version:             5.10.207
	I0328 01:33:27.812133    6044 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0328 01:33:27.812259    6044 command_runner.go:130] >   Operating System:           linux
	I0328 01:33:27.812282    6044 command_runner.go:130] >   Architecture:               amd64
	I0328 01:33:27.812282    6044 command_runner.go:130] >   Container Runtime Version:  docker://26.0.0
	I0328 01:33:27.812282    6044 command_runner.go:130] >   Kubelet Version:            v1.29.3
	I0328 01:33:27.812282    6044 command_runner.go:130] >   Kube-Proxy Version:         v1.29.3
	I0328 01:33:27.812282    6044 command_runner.go:130] > PodCIDR:                      10.244.0.0/24
	I0328 01:33:27.812282    6044 command_runner.go:130] > PodCIDRs:                     10.244.0.0/24
	I0328 01:33:27.812282    6044 command_runner.go:130] > Non-terminated Pods:          (9 in total)
	I0328 01:33:27.812375    6044 command_runner.go:130] >   Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0328 01:33:27.812375    6044 command_runner.go:130] >   ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	I0328 01:33:27.812440    6044 command_runner.go:130] >   default                     busybox-7fdf7869d9-ct428                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	I0328 01:33:27.812440    6044 command_runner.go:130] >   kube-system                 coredns-76f75df574-776ph                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     25m
	I0328 01:33:27.812464    6044 command_runner.go:130] >   kube-system                 etcd-multinode-240000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         68s
	I0328 01:33:27.812464    6044 command_runner.go:130] >   kube-system                 kindnet-rwghf                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      25m
	I0328 01:33:27.812464    6044 command_runner.go:130] >   kube-system                 kube-apiserver-multinode-240000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         68s
	I0328 01:33:27.812464    6044 command_runner.go:130] >   kube-system                 kube-controller-manager-multinode-240000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         25m
	I0328 01:33:27.812575    6044 command_runner.go:130] >   kube-system                 kube-proxy-47rqg                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         25m
	I0328 01:33:27.812598    6044 command_runner.go:130] >   kube-system                 kube-scheduler-multinode-240000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         25m
	I0328 01:33:27.812598    6044 command_runner.go:130] >   kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         25m
	I0328 01:33:27.812598    6044 command_runner.go:130] > Allocated resources:
	I0328 01:33:27.812598    6044 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0328 01:33:27.812598    6044 command_runner.go:130] >   Resource           Requests     Limits
	I0328 01:33:27.812598    6044 command_runner.go:130] >   --------           --------     ------
	I0328 01:33:27.812703    6044 command_runner.go:130] >   cpu                850m (42%)   100m (5%)
	I0328 01:33:27.812703    6044 command_runner.go:130] >   memory             220Mi (10%)  220Mi (10%)
	I0328 01:33:27.812703    6044 command_runner.go:130] >   ephemeral-storage  0 (0%)       0 (0%)
	I0328 01:33:27.812703    6044 command_runner.go:130] >   hugepages-2Mi      0 (0%)       0 (0%)
	I0328 01:33:27.812758    6044 command_runner.go:130] > Events:
	I0328 01:33:27.812758    6044 command_runner.go:130] >   Type    Reason                   Age                From             Message
	I0328 01:33:27.812783    6044 command_runner.go:130] >   ----    ------                   ----               ----             -------
	I0328 01:33:27.812783    6044 command_runner.go:130] >   Normal  Starting                 25m                kube-proxy       
	I0328 01:33:27.812813    6044 command_runner.go:130] >   Normal  Starting                 65s                kube-proxy       
	I0328 01:33:27.812813    6044 command_runner.go:130] >   Normal  Starting                 26m                kubelet          Starting kubelet.
	I0328 01:33:27.812813    6044 command_runner.go:130] >   Normal  NodeHasSufficientMemory  26m (x8 over 26m)  kubelet          Node multinode-240000 status is now: NodeHasSufficientMemory
	I0328 01:33:27.812813    6044 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    26m (x8 over 26m)  kubelet          Node multinode-240000 status is now: NodeHasNoDiskPressure
	I0328 01:33:27.812813    6044 command_runner.go:130] >   Normal  NodeHasSufficientPID     26m (x7 over 26m)  kubelet          Node multinode-240000 status is now: NodeHasSufficientPID
	I0328 01:33:27.812813    6044 command_runner.go:130] >   Normal  NodeAllocatableEnforced  26m                kubelet          Updated Node Allocatable limit across pods
	I0328 01:33:27.812813    6044 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    25m                kubelet          Node multinode-240000 status is now: NodeHasNoDiskPressure
	I0328 01:33:27.812813    6044 command_runner.go:130] >   Normal  NodeAllocatableEnforced  25m                kubelet          Updated Node Allocatable limit across pods
	I0328 01:33:27.812813    6044 command_runner.go:130] >   Normal  NodeHasSufficientMemory  25m                kubelet          Node multinode-240000 status is now: NodeHasSufficientMemory
	I0328 01:33:27.812813    6044 command_runner.go:130] >   Normal  NodeHasSufficientPID     25m                kubelet          Node multinode-240000 status is now: NodeHasSufficientPID
	I0328 01:33:27.812813    6044 command_runner.go:130] >   Normal  Starting                 25m                kubelet          Starting kubelet.
	I0328 01:33:27.812813    6044 command_runner.go:130] >   Normal  RegisteredNode           25m                node-controller  Node multinode-240000 event: Registered Node multinode-240000 in Controller
	I0328 01:33:27.812813    6044 command_runner.go:130] >   Normal  NodeReady                25m                kubelet          Node multinode-240000 status is now: NodeReady
	I0328 01:33:27.812813    6044 command_runner.go:130] >   Normal  Starting                 74s                kubelet          Starting kubelet.
	I0328 01:33:27.812813    6044 command_runner.go:130] >   Normal  NodeHasSufficientPID     74s (x7 over 74s)  kubelet          Node multinode-240000 status is now: NodeHasSufficientPID
	I0328 01:33:27.812813    6044 command_runner.go:130] >   Normal  NodeAllocatableEnforced  74s                kubelet          Updated Node Allocatable limit across pods
	I0328 01:33:27.812813    6044 command_runner.go:130] >   Normal  NodeHasSufficientMemory  73s (x8 over 74s)  kubelet          Node multinode-240000 status is now: NodeHasSufficientMemory
	I0328 01:33:27.812813    6044 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    73s (x8 over 74s)  kubelet          Node multinode-240000 status is now: NodeHasNoDiskPressure
	I0328 01:33:27.812813    6044 command_runner.go:130] >   Normal  RegisteredNode           55s                node-controller  Node multinode-240000 event: Registered Node multinode-240000 in Controller
	I0328 01:33:27.812813    6044 command_runner.go:130] > Name:               multinode-240000-m02
	I0328 01:33:27.812813    6044 command_runner.go:130] > Roles:              <none>
	I0328 01:33:27.812813    6044 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0328 01:33:27.812813    6044 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0328 01:33:27.812813    6044 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0328 01:33:27.812813    6044 command_runner.go:130] >                     kubernetes.io/hostname=multinode-240000-m02
	I0328 01:33:27.812813    6044 command_runner.go:130] >                     kubernetes.io/os=linux
	I0328 01:33:27.812813    6044 command_runner.go:130] >                     minikube.k8s.io/commit=1a940f980f77c9bfc8ef93678db8c1f49c9dd79d
	I0328 01:33:27.812813    6044 command_runner.go:130] >                     minikube.k8s.io/name=multinode-240000
	I0328 01:33:27.812813    6044 command_runner.go:130] >                     minikube.k8s.io/primary=false
	I0328 01:33:27.812813    6044 command_runner.go:130] >                     minikube.k8s.io/updated_at=2024_03_28T01_10_55_0700
	I0328 01:33:27.812813    6044 command_runner.go:130] >                     minikube.k8s.io/version=v1.33.0-beta.0
	I0328 01:33:27.812813    6044 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0328 01:33:27.812813    6044 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0328 01:33:27.812813    6044 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0328 01:33:27.812813    6044 command_runner.go:130] > CreationTimestamp:  Thu, 28 Mar 2024 01:10:54 +0000
	I0328 01:33:27.812813    6044 command_runner.go:130] > Taints:             node.kubernetes.io/unreachable:NoExecute
	I0328 01:33:27.812813    6044 command_runner.go:130] >                     node.kubernetes.io/unreachable:NoSchedule
	I0328 01:33:27.812813    6044 command_runner.go:130] > Unschedulable:      false
	I0328 01:33:27.812813    6044 command_runner.go:130] > Lease:
	I0328 01:33:27.812813    6044 command_runner.go:130] >   HolderIdentity:  multinode-240000-m02
	I0328 01:33:27.812813    6044 command_runner.go:130] >   AcquireTime:     <unset>
	I0328 01:33:27.812813    6044 command_runner.go:130] >   RenewTime:       Thu, 28 Mar 2024 01:28:58 +0000
	I0328 01:33:27.812813    6044 command_runner.go:130] > Conditions:
	I0328 01:33:27.812813    6044 command_runner.go:130] >   Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	I0328 01:33:27.812813    6044 command_runner.go:130] >   ----             ------    -----------------                 ------------------                ------              -------
	I0328 01:33:27.812813    6044 command_runner.go:130] >   MemoryPressure   Unknown   Thu, 28 Mar 2024 01:27:18 +0000   Thu, 28 Mar 2024 01:33:12 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0328 01:33:27.813425    6044 command_runner.go:130] >   DiskPressure     Unknown   Thu, 28 Mar 2024 01:27:18 +0000   Thu, 28 Mar 2024 01:33:12 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0328 01:33:27.813425    6044 command_runner.go:130] >   PIDPressure      Unknown   Thu, 28 Mar 2024 01:27:18 +0000   Thu, 28 Mar 2024 01:33:12 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0328 01:33:27.813425    6044 command_runner.go:130] >   Ready            Unknown   Thu, 28 Mar 2024 01:27:18 +0000   Thu, 28 Mar 2024 01:33:12 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0328 01:33:27.813425    6044 command_runner.go:130] > Addresses:
	I0328 01:33:27.813425    6044 command_runner.go:130] >   InternalIP:  172.28.230.250
	I0328 01:33:27.813425    6044 command_runner.go:130] >   Hostname:    multinode-240000-m02
	I0328 01:33:27.813425    6044 command_runner.go:130] > Capacity:
	I0328 01:33:27.813425    6044 command_runner.go:130] >   cpu:                2
	I0328 01:33:27.813425    6044 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0328 01:33:27.813425    6044 command_runner.go:130] >   hugepages-2Mi:      0
	I0328 01:33:27.813425    6044 command_runner.go:130] >   memory:             2164264Ki
	I0328 01:33:27.813425    6044 command_runner.go:130] >   pods:               110
	I0328 01:33:27.813425    6044 command_runner.go:130] > Allocatable:
	I0328 01:33:27.813581    6044 command_runner.go:130] >   cpu:                2
	I0328 01:33:27.813581    6044 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0328 01:33:27.813581    6044 command_runner.go:130] >   hugepages-2Mi:      0
	I0328 01:33:27.813581    6044 command_runner.go:130] >   memory:             2164264Ki
	I0328 01:33:27.813581    6044 command_runner.go:130] >   pods:               110
	I0328 01:33:27.813581    6044 command_runner.go:130] > System Info:
	I0328 01:33:27.813581    6044 command_runner.go:130] >   Machine ID:                 2bcbb6f523d04ea69ba7f23d0cdfec75
	I0328 01:33:27.813581    6044 command_runner.go:130] >   System UUID:                d499bd2a-38ff-6a40-b0a5-5534aeedd854
	I0328 01:33:27.813581    6044 command_runner.go:130] >   Boot ID:                    cfc1ec0e-7646-40c9-8245-9d09d77d6b1d
	I0328 01:33:27.813689    6044 command_runner.go:130] >   Kernel Version:             5.10.207
	I0328 01:33:27.813689    6044 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0328 01:33:27.813713    6044 command_runner.go:130] >   Operating System:           linux
	I0328 01:33:27.813713    6044 command_runner.go:130] >   Architecture:               amd64
	I0328 01:33:27.813713    6044 command_runner.go:130] >   Container Runtime Version:  docker://26.0.0
	I0328 01:33:27.813713    6044 command_runner.go:130] >   Kubelet Version:            v1.29.3
	I0328 01:33:27.813713    6044 command_runner.go:130] >   Kube-Proxy Version:         v1.29.3
	I0328 01:33:27.813713    6044 command_runner.go:130] > PodCIDR:                      10.244.1.0/24
	I0328 01:33:27.813713    6044 command_runner.go:130] > PodCIDRs:                     10.244.1.0/24
	I0328 01:33:27.813786    6044 command_runner.go:130] > Non-terminated Pods:          (3 in total)
	I0328 01:33:27.813786    6044 command_runner.go:130] >   Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0328 01:33:27.813786    6044 command_runner.go:130] >   ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	I0328 01:33:27.813786    6044 command_runner.go:130] >   default                     busybox-7fdf7869d9-zgwm4    0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	I0328 01:33:27.813786    6044 command_runner.go:130] >   kube-system                 kindnet-hsnfl               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      22m
	I0328 01:33:27.813892    6044 command_runner.go:130] >   kube-system                 kube-proxy-t88gz            0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	I0328 01:33:27.813892    6044 command_runner.go:130] > Allocated resources:
	I0328 01:33:27.813892    6044 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0328 01:33:27.813892    6044 command_runner.go:130] >   Resource           Requests   Limits
	I0328 01:33:27.813892    6044 command_runner.go:130] >   --------           --------   ------
	I0328 01:33:27.813892    6044 command_runner.go:130] >   cpu                100m (5%)  100m (5%)
	I0328 01:33:27.813965    6044 command_runner.go:130] >   memory             50Mi (2%)  50Mi (2%)
	I0328 01:33:27.813965    6044 command_runner.go:130] >   ephemeral-storage  0 (0%)     0 (0%)
	I0328 01:33:27.813965    6044 command_runner.go:130] >   hugepages-2Mi      0 (0%)     0 (0%)
	I0328 01:33:27.813965    6044 command_runner.go:130] > Events:
	I0328 01:33:27.813965    6044 command_runner.go:130] >   Type    Reason                   Age                From             Message
	I0328 01:33:27.814028    6044 command_runner.go:130] >   ----    ------                   ----               ----             -------
	I0328 01:33:27.814054    6044 command_runner.go:130] >   Normal  Starting                 22m                kube-proxy       
	I0328 01:33:27.814054    6044 command_runner.go:130] >   Normal  Starting                 22m                kubelet          Starting kubelet.
	I0328 01:33:27.814085    6044 command_runner.go:130] >   Normal  NodeHasSufficientMemory  22m (x2 over 22m)  kubelet          Node multinode-240000-m02 status is now: NodeHasSufficientMemory
	I0328 01:33:27.814085    6044 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    22m (x2 over 22m)  kubelet          Node multinode-240000-m02 status is now: NodeHasNoDiskPressure
	I0328 01:33:27.814124    6044 command_runner.go:130] >   Normal  NodeHasSufficientPID     22m (x2 over 22m)  kubelet          Node multinode-240000-m02 status is now: NodeHasSufficientPID
	I0328 01:33:27.814124    6044 command_runner.go:130] >   Normal  NodeAllocatableEnforced  22m                kubelet          Updated Node Allocatable limit across pods
	I0328 01:33:27.814124    6044 command_runner.go:130] >   Normal  RegisteredNode           22m                node-controller  Node multinode-240000-m02 event: Registered Node multinode-240000-m02 in Controller
	I0328 01:33:27.814124    6044 command_runner.go:130] >   Normal  NodeReady                22m                kubelet          Node multinode-240000-m02 status is now: NodeReady
	I0328 01:33:27.814202    6044 command_runner.go:130] >   Normal  RegisteredNode           55s                node-controller  Node multinode-240000-m02 event: Registered Node multinode-240000-m02 in Controller
	I0328 01:33:27.814256    6044 command_runner.go:130] >   Normal  NodeNotReady             15s                node-controller  Node multinode-240000-m02 status is now: NodeNotReady
	I0328 01:33:27.814256    6044 command_runner.go:130] > Name:               multinode-240000-m03
	I0328 01:33:27.814256    6044 command_runner.go:130] > Roles:              <none>
	I0328 01:33:27.814256    6044 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0328 01:33:27.814256    6044 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0328 01:33:27.814256    6044 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0328 01:33:27.814256    6044 command_runner.go:130] >                     kubernetes.io/hostname=multinode-240000-m03
	I0328 01:33:27.814256    6044 command_runner.go:130] >                     kubernetes.io/os=linux
	I0328 01:33:27.814256    6044 command_runner.go:130] >                     minikube.k8s.io/commit=1a940f980f77c9bfc8ef93678db8c1f49c9dd79d
	I0328 01:33:27.814256    6044 command_runner.go:130] >                     minikube.k8s.io/name=multinode-240000
	I0328 01:33:27.814256    6044 command_runner.go:130] >                     minikube.k8s.io/primary=false
	I0328 01:33:27.814256    6044 command_runner.go:130] >                     minikube.k8s.io/updated_at=2024_03_28T01_27_31_0700
	I0328 01:33:27.814256    6044 command_runner.go:130] >                     minikube.k8s.io/version=v1.33.0-beta.0
	I0328 01:33:27.814256    6044 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0328 01:33:27.814256    6044 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0328 01:33:27.814256    6044 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0328 01:33:27.814256    6044 command_runner.go:130] > CreationTimestamp:  Thu, 28 Mar 2024 01:27:30 +0000
	I0328 01:33:27.814256    6044 command_runner.go:130] > Taints:             node.kubernetes.io/unreachable:NoExecute
	I0328 01:33:27.814256    6044 command_runner.go:130] >                     node.kubernetes.io/unreachable:NoSchedule
	I0328 01:33:27.814256    6044 command_runner.go:130] > Unschedulable:      false
	I0328 01:33:27.814256    6044 command_runner.go:130] > Lease:
	I0328 01:33:27.814256    6044 command_runner.go:130] >   HolderIdentity:  multinode-240000-m03
	I0328 01:33:27.814256    6044 command_runner.go:130] >   AcquireTime:     <unset>
	I0328 01:33:27.814256    6044 command_runner.go:130] >   RenewTime:       Thu, 28 Mar 2024 01:28:31 +0000
	I0328 01:33:27.814256    6044 command_runner.go:130] > Conditions:
	I0328 01:33:27.814256    6044 command_runner.go:130] >   Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	I0328 01:33:27.814256    6044 command_runner.go:130] >   ----             ------    -----------------                 ------------------                ------              -------
	I0328 01:33:27.814256    6044 command_runner.go:130] >   MemoryPressure   Unknown   Thu, 28 Mar 2024 01:27:36 +0000   Thu, 28 Mar 2024 01:29:14 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0328 01:33:27.814256    6044 command_runner.go:130] >   DiskPressure     Unknown   Thu, 28 Mar 2024 01:27:36 +0000   Thu, 28 Mar 2024 01:29:14 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0328 01:33:27.814256    6044 command_runner.go:130] >   PIDPressure      Unknown   Thu, 28 Mar 2024 01:27:36 +0000   Thu, 28 Mar 2024 01:29:14 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0328 01:33:27.814256    6044 command_runner.go:130] >   Ready            Unknown   Thu, 28 Mar 2024 01:27:36 +0000   Thu, 28 Mar 2024 01:29:14 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0328 01:33:27.814256    6044 command_runner.go:130] > Addresses:
	I0328 01:33:27.814256    6044 command_runner.go:130] >   InternalIP:  172.28.224.172
	I0328 01:33:27.814256    6044 command_runner.go:130] >   Hostname:    multinode-240000-m03
	I0328 01:33:27.814256    6044 command_runner.go:130] > Capacity:
	I0328 01:33:27.814256    6044 command_runner.go:130] >   cpu:                2
	I0328 01:33:27.814256    6044 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0328 01:33:27.814256    6044 command_runner.go:130] >   hugepages-2Mi:      0
	I0328 01:33:27.814256    6044 command_runner.go:130] >   memory:             2164264Ki
	I0328 01:33:27.814256    6044 command_runner.go:130] >   pods:               110
	I0328 01:33:27.814256    6044 command_runner.go:130] > Allocatable:
	I0328 01:33:27.814256    6044 command_runner.go:130] >   cpu:                2
	I0328 01:33:27.814256    6044 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0328 01:33:27.814256    6044 command_runner.go:130] >   hugepages-2Mi:      0
	I0328 01:33:27.814256    6044 command_runner.go:130] >   memory:             2164264Ki
	I0328 01:33:27.814256    6044 command_runner.go:130] >   pods:               110
	I0328 01:33:27.814256    6044 command_runner.go:130] > System Info:
	I0328 01:33:27.814786    6044 command_runner.go:130] >   Machine ID:                 53e5a22090614654950f5f4d91307651
	I0328 01:33:27.814786    6044 command_runner.go:130] >   System UUID:                1b1cc332-0092-fa4b-8d09-1c599aae83ad
	I0328 01:33:27.814786    6044 command_runner.go:130] >   Boot ID:                    7cabd891-d8ad-4af2-8060-94ae87174528
	I0328 01:33:27.814786    6044 command_runner.go:130] >   Kernel Version:             5.10.207
	I0328 01:33:27.814786    6044 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0328 01:33:27.814786    6044 command_runner.go:130] >   Operating System:           linux
	I0328 01:33:27.814786    6044 command_runner.go:130] >   Architecture:               amd64
	I0328 01:33:27.814786    6044 command_runner.go:130] >   Container Runtime Version:  docker://26.0.0
	I0328 01:33:27.814786    6044 command_runner.go:130] >   Kubelet Version:            v1.29.3
	I0328 01:33:27.814786    6044 command_runner.go:130] >   Kube-Proxy Version:         v1.29.3
	I0328 01:33:27.814786    6044 command_runner.go:130] > PodCIDR:                      10.244.3.0/24
	I0328 01:33:27.815051    6044 command_runner.go:130] > PodCIDRs:                     10.244.3.0/24
	I0328 01:33:27.815051    6044 command_runner.go:130] > Non-terminated Pods:          (2 in total)
	I0328 01:33:27.815051    6044 command_runner.go:130] >   Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0328 01:33:27.815051    6044 command_runner.go:130] >   ---------                   ----                ------------  ----------  ---------------  -------------  ---
	I0328 01:33:27.815051    6044 command_runner.go:130] >   kube-system                 kindnet-jvgx2       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      17m
	I0328 01:33:27.815156    6044 command_runner.go:130] >   kube-system                 kube-proxy-55rch    0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	I0328 01:33:27.815185    6044 command_runner.go:130] > Allocated resources:
	I0328 01:33:27.815185    6044 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0328 01:33:27.815185    6044 command_runner.go:130] >   Resource           Requests   Limits
	I0328 01:33:27.815185    6044 command_runner.go:130] >   --------           --------   ------
	I0328 01:33:27.815185    6044 command_runner.go:130] >   cpu                100m (5%)  100m (5%)
	I0328 01:33:27.815185    6044 command_runner.go:130] >   memory             50Mi (2%)  50Mi (2%)
	I0328 01:33:27.815185    6044 command_runner.go:130] >   ephemeral-storage  0 (0%)     0 (0%)
	I0328 01:33:27.815185    6044 command_runner.go:130] >   hugepages-2Mi      0 (0%)     0 (0%)
	I0328 01:33:27.815185    6044 command_runner.go:130] > Events:
	I0328 01:33:27.815185    6044 command_runner.go:130] >   Type    Reason                   Age                    From             Message
	I0328 01:33:27.815185    6044 command_runner.go:130] >   ----    ------                   ----                   ----             -------
	I0328 01:33:27.815185    6044 command_runner.go:130] >   Normal  Starting                 17m                    kube-proxy       
	I0328 01:33:27.815185    6044 command_runner.go:130] >   Normal  Starting                 5m54s                  kube-proxy       
	I0328 01:33:27.815185    6044 command_runner.go:130] >   Normal  NodeHasSufficientMemory  17m (x2 over 17m)      kubelet          Node multinode-240000-m03 status is now: NodeHasSufficientMemory
	I0328 01:33:27.815185    6044 command_runner.go:130] >   Normal  NodeAllocatableEnforced  17m                    kubelet          Updated Node Allocatable limit across pods
	I0328 01:33:27.815185    6044 command_runner.go:130] >   Normal  Starting                 17m                    kubelet          Starting kubelet.
	I0328 01:33:27.815185    6044 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    17m (x2 over 17m)      kubelet          Node multinode-240000-m03 status is now: NodeHasNoDiskPressure
	I0328 01:33:27.815185    6044 command_runner.go:130] >   Normal  NodeHasSufficientPID     17m (x2 over 17m)      kubelet          Node multinode-240000-m03 status is now: NodeHasSufficientPID
	I0328 01:33:27.815185    6044 command_runner.go:130] >   Normal  NodeReady                17m                    kubelet          Node multinode-240000-m03 status is now: NodeReady
	I0328 01:33:27.815185    6044 command_runner.go:130] >   Normal  Starting                 5m57s                  kubelet          Starting kubelet.
	I0328 01:33:27.815185    6044 command_runner.go:130] >   Normal  NodeHasSufficientMemory  5m57s (x2 over 5m57s)  kubelet          Node multinode-240000-m03 status is now: NodeHasSufficientMemory
	I0328 01:33:27.815185    6044 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    5m57s (x2 over 5m57s)  kubelet          Node multinode-240000-m03 status is now: NodeHasNoDiskPressure
	I0328 01:33:27.815185    6044 command_runner.go:130] >   Normal  NodeHasSufficientPID     5m57s (x2 over 5m57s)  kubelet          Node multinode-240000-m03 status is now: NodeHasSufficientPID
	I0328 01:33:27.815185    6044 command_runner.go:130] >   Normal  NodeAllocatableEnforced  5m57s                  kubelet          Updated Node Allocatable limit across pods
	I0328 01:33:27.815185    6044 command_runner.go:130] >   Normal  RegisteredNode           5m53s                  node-controller  Node multinode-240000-m03 event: Registered Node multinode-240000-m03 in Controller
	I0328 01:33:27.815185    6044 command_runner.go:130] >   Normal  NodeReady                5m51s                  kubelet          Node multinode-240000-m03 status is now: NodeReady
	I0328 01:33:27.815185    6044 command_runner.go:130] >   Normal  NodeNotReady             4m13s                  node-controller  Node multinode-240000-m03 status is now: NodeNotReady
	I0328 01:33:27.815185    6044 command_runner.go:130] >   Normal  RegisteredNode           55s                    node-controller  Node multinode-240000-m03 event: Registered Node multinode-240000-m03 in Controller
	I0328 01:33:27.826410    6044 logs.go:123] Gathering logs for kube-apiserver [6539c85e1b61] ...
	I0328 01:33:27.826410    6044 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6539c85e1b61"
	I0328 01:33:27.860425    6044 command_runner.go:130] ! I0328 01:32:16.440903       1 options.go:222] external host was not specified, using 172.28.229.19
	I0328 01:33:27.860801    6044 command_runner.go:130] ! I0328 01:32:16.443001       1 server.go:148] Version: v1.29.3
	I0328 01:33:27.861466    6044 command_runner.go:130] ! I0328 01:32:16.443211       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0328 01:33:27.861466    6044 command_runner.go:130] ! I0328 01:32:17.234065       1 shared_informer.go:311] Waiting for caches to sync for node_authorizer
	I0328 01:33:27.861466    6044 command_runner.go:130] ! I0328 01:32:17.251028       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0328 01:33:27.861859    6044 command_runner.go:130] ! I0328 01:32:17.252647       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0328 01:33:27.861859    6044 command_runner.go:130] ! I0328 01:32:17.253295       1 instance.go:297] Using reconciler: lease
	I0328 01:33:27.861932    6044 command_runner.go:130] ! I0328 01:32:17.488371       1 handler.go:275] Adding GroupVersion apiextensions.k8s.io v1 to ResourceManager
	I0328 01:33:27.861932    6044 command_runner.go:130] ! W0328 01:32:17.492937       1 genericapiserver.go:742] Skipping API apiextensions.k8s.io/v1beta1 because it has no resources.
	I0328 01:33:27.861932    6044 command_runner.go:130] ! I0328 01:32:17.992938       1 handler.go:275] Adding GroupVersion  v1 to ResourceManager
	I0328 01:33:27.861932    6044 command_runner.go:130] ! I0328 01:32:17.993291       1 instance.go:693] API group "internal.apiserver.k8s.io" is not enabled, skipping.
	I0328 01:33:27.862022    6044 command_runner.go:130] ! I0328 01:32:18.498808       1 instance.go:693] API group "resource.k8s.io" is not enabled, skipping.
	I0328 01:33:27.862022    6044 command_runner.go:130] ! I0328 01:32:18.513162       1 handler.go:275] Adding GroupVersion authentication.k8s.io v1 to ResourceManager
	I0328 01:33:27.862050    6044 command_runner.go:130] ! W0328 01:32:18.513265       1 genericapiserver.go:742] Skipping API authentication.k8s.io/v1beta1 because it has no resources.
	I0328 01:33:27.862050    6044 command_runner.go:130] ! W0328 01:32:18.513276       1 genericapiserver.go:742] Skipping API authentication.k8s.io/v1alpha1 because it has no resources.
	I0328 01:33:27.862050    6044 command_runner.go:130] ! I0328 01:32:18.513869       1 handler.go:275] Adding GroupVersion authorization.k8s.io v1 to ResourceManager
	I0328 01:33:27.862127    6044 command_runner.go:130] ! W0328 01:32:18.513921       1 genericapiserver.go:742] Skipping API authorization.k8s.io/v1beta1 because it has no resources.
	I0328 01:33:27.862127    6044 command_runner.go:130] ! I0328 01:32:18.515227       1 handler.go:275] Adding GroupVersion autoscaling v2 to ResourceManager
	I0328 01:33:27.862127    6044 command_runner.go:130] ! I0328 01:32:18.516586       1 handler.go:275] Adding GroupVersion autoscaling v1 to ResourceManager
	I0328 01:33:27.862188    6044 command_runner.go:130] ! W0328 01:32:18.516885       1 genericapiserver.go:742] Skipping API autoscaling/v2beta1 because it has no resources.
	I0328 01:33:27.862188    6044 command_runner.go:130] ! W0328 01:32:18.516898       1 genericapiserver.go:742] Skipping API autoscaling/v2beta2 because it has no resources.
	I0328 01:33:27.862188    6044 command_runner.go:130] ! I0328 01:32:18.519356       1 handler.go:275] Adding GroupVersion batch v1 to ResourceManager
	I0328 01:33:27.862188    6044 command_runner.go:130] ! W0328 01:32:18.519460       1 genericapiserver.go:742] Skipping API batch/v1beta1 because it has no resources.
	I0328 01:33:27.862248    6044 command_runner.go:130] ! I0328 01:32:18.520668       1 handler.go:275] Adding GroupVersion certificates.k8s.io v1 to ResourceManager
	I0328 01:33:27.862248    6044 command_runner.go:130] ! W0328 01:32:18.520820       1 genericapiserver.go:742] Skipping API certificates.k8s.io/v1beta1 because it has no resources.
	I0328 01:33:27.862248    6044 command_runner.go:130] ! W0328 01:32:18.520830       1 genericapiserver.go:742] Skipping API certificates.k8s.io/v1alpha1 because it has no resources.
	I0328 01:33:27.862248    6044 command_runner.go:130] ! I0328 01:32:18.521802       1 handler.go:275] Adding GroupVersion coordination.k8s.io v1 to ResourceManager
	I0328 01:33:27.862314    6044 command_runner.go:130] ! W0328 01:32:18.521903       1 genericapiserver.go:742] Skipping API coordination.k8s.io/v1beta1 because it has no resources.
	I0328 01:33:27.862314    6044 command_runner.go:130] ! W0328 01:32:18.521953       1 genericapiserver.go:742] Skipping API discovery.k8s.io/v1beta1 because it has no resources.
	I0328 01:33:27.862314    6044 command_runner.go:130] ! I0328 01:32:18.523269       1 handler.go:275] Adding GroupVersion discovery.k8s.io v1 to ResourceManager
	I0328 01:33:27.862314    6044 command_runner.go:130] ! I0328 01:32:18.525859       1 handler.go:275] Adding GroupVersion networking.k8s.io v1 to ResourceManager
	I0328 01:33:27.862381    6044 command_runner.go:130] ! W0328 01:32:18.525960       1 genericapiserver.go:742] Skipping API networking.k8s.io/v1beta1 because it has no resources.
	I0328 01:33:27.862441    6044 command_runner.go:130] ! W0328 01:32:18.525970       1 genericapiserver.go:742] Skipping API networking.k8s.io/v1alpha1 because it has no resources.
	I0328 01:33:27.862456    6044 command_runner.go:130] ! I0328 01:32:18.526646       1 handler.go:275] Adding GroupVersion node.k8s.io v1 to ResourceManager
	I0328 01:33:27.862456    6044 command_runner.go:130] ! W0328 01:32:18.526842       1 genericapiserver.go:742] Skipping API node.k8s.io/v1beta1 because it has no resources.
	I0328 01:33:27.862519    6044 command_runner.go:130] ! W0328 01:32:18.526857       1 genericapiserver.go:742] Skipping API node.k8s.io/v1alpha1 because it has no resources.
	I0328 01:33:27.862519    6044 command_runner.go:130] ! I0328 01:32:18.527970       1 handler.go:275] Adding GroupVersion policy v1 to ResourceManager
	I0328 01:33:27.862519    6044 command_runner.go:130] ! W0328 01:32:18.528080       1 genericapiserver.go:742] Skipping API policy/v1beta1 because it has no resources.
	I0328 01:33:27.862519    6044 command_runner.go:130] ! I0328 01:32:18.530546       1 handler.go:275] Adding GroupVersion rbac.authorization.k8s.io v1 to ResourceManager
	I0328 01:33:27.862585    6044 command_runner.go:130] ! W0328 01:32:18.530652       1 genericapiserver.go:742] Skipping API rbac.authorization.k8s.io/v1beta1 because it has no resources.
	I0328 01:33:27.862610    6044 command_runner.go:130] ! W0328 01:32:18.530663       1 genericapiserver.go:742] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
	I0328 01:33:27.862610    6044 command_runner.go:130] ! I0328 01:32:18.531469       1 handler.go:275] Adding GroupVersion scheduling.k8s.io v1 to ResourceManager
	I0328 01:33:27.862639    6044 command_runner.go:130] ! W0328 01:32:18.531576       1 genericapiserver.go:742] Skipping API scheduling.k8s.io/v1beta1 because it has no resources.
	I0328 01:33:27.862639    6044 command_runner.go:130] ! W0328 01:32:18.531586       1 genericapiserver.go:742] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
	I0328 01:33:27.862639    6044 command_runner.go:130] ! I0328 01:32:18.534848       1 handler.go:275] Adding GroupVersion storage.k8s.io v1 to ResourceManager
	I0328 01:33:27.862639    6044 command_runner.go:130] ! W0328 01:32:18.534946       1 genericapiserver.go:742] Skipping API storage.k8s.io/v1beta1 because it has no resources.
	I0328 01:33:27.862639    6044 command_runner.go:130] ! W0328 01:32:18.534974       1 genericapiserver.go:742] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
	I0328 01:33:27.862639    6044 command_runner.go:130] ! I0328 01:32:18.537355       1 handler.go:275] Adding GroupVersion flowcontrol.apiserver.k8s.io v1beta3 to ResourceManager
	I0328 01:33:27.862639    6044 command_runner.go:130] ! I0328 01:32:18.539242       1 handler.go:275] Adding GroupVersion flowcontrol.apiserver.k8s.io v1 to ResourceManager
	I0328 01:33:27.862639    6044 command_runner.go:130] ! W0328 01:32:18.539354       1 genericapiserver.go:742] Skipping API flowcontrol.apiserver.k8s.io/v1beta2 because it has no resources.
	I0328 01:33:27.862639    6044 command_runner.go:130] ! W0328 01:32:18.539387       1 genericapiserver.go:742] Skipping API flowcontrol.apiserver.k8s.io/v1beta1 because it has no resources.
	I0328 01:33:27.862639    6044 command_runner.go:130] ! I0328 01:32:18.545662       1 handler.go:275] Adding GroupVersion apps v1 to ResourceManager
	I0328 01:33:27.862639    6044 command_runner.go:130] ! W0328 01:32:18.545825       1 genericapiserver.go:742] Skipping API apps/v1beta2 because it has no resources.
	I0328 01:33:27.862639    6044 command_runner.go:130] ! W0328 01:32:18.545834       1 genericapiserver.go:742] Skipping API apps/v1beta1 because it has no resources.
	I0328 01:33:27.862639    6044 command_runner.go:130] ! I0328 01:32:18.547229       1 handler.go:275] Adding GroupVersion admissionregistration.k8s.io v1 to ResourceManager
	I0328 01:33:27.862639    6044 command_runner.go:130] ! W0328 01:32:18.547341       1 genericapiserver.go:742] Skipping API admissionregistration.k8s.io/v1beta1 because it has no resources.
	I0328 01:33:27.862639    6044 command_runner.go:130] ! W0328 01:32:18.547350       1 genericapiserver.go:742] Skipping API admissionregistration.k8s.io/v1alpha1 because it has no resources.
	I0328 01:33:27.862639    6044 command_runner.go:130] ! I0328 01:32:18.548292       1 handler.go:275] Adding GroupVersion events.k8s.io v1 to ResourceManager
	I0328 01:33:27.862639    6044 command_runner.go:130] ! W0328 01:32:18.548390       1 genericapiserver.go:742] Skipping API events.k8s.io/v1beta1 because it has no resources.
	I0328 01:33:27.862639    6044 command_runner.go:130] ! I0328 01:32:18.574598       1 handler.go:275] Adding GroupVersion apiregistration.k8s.io v1 to ResourceManager
	I0328 01:33:27.862639    6044 command_runner.go:130] ! W0328 01:32:18.574814       1 genericapiserver.go:742] Skipping API apiregistration.k8s.io/v1beta1 because it has no resources.
	I0328 01:33:27.862639    6044 command_runner.go:130] ! I0328 01:32:19.274952       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0328 01:33:27.862639    6044 command_runner.go:130] ! I0328 01:32:19.275081       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0328 01:33:27.862639    6044 command_runner.go:130] ! I0328 01:32:19.275445       1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key"
	I0328 01:33:27.862639    6044 command_runner.go:130] ! I0328 01:32:19.275546       1 secure_serving.go:213] Serving securely on [::]:8443
	I0328 01:33:27.862639    6044 command_runner.go:130] ! I0328 01:32:19.275631       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0328 01:33:27.862639    6044 command_runner.go:130] ! I0328 01:32:19.276130       1 controller.go:80] Starting OpenAPI V3 AggregationController
	I0328 01:33:27.862639    6044 command_runner.go:130] ! I0328 01:32:19.279110       1 available_controller.go:423] Starting AvailableConditionController
	I0328 01:33:27.862639    6044 command_runner.go:130] ! I0328 01:32:19.280530       1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
	I0328 01:33:27.862639    6044 command_runner.go:130] ! I0328 01:32:19.289454       1 controller.go:116] Starting legacy_token_tracking_controller
	I0328 01:33:27.862639    6044 command_runner.go:130] ! I0328 01:32:19.289554       1 shared_informer.go:311] Waiting for caches to sync for configmaps
	I0328 01:33:27.862639    6044 command_runner.go:130] ! I0328 01:32:19.289661       1 aggregator.go:163] waiting for initial CRD sync...
	I0328 01:33:27.862639    6044 command_runner.go:130] ! I0328 01:32:19.291196       1 gc_controller.go:78] Starting apiserver lease garbage collector
	I0328 01:33:27.862639    6044 command_runner.go:130] ! I0328 01:32:19.291542       1 dynamic_serving_content.go:132] "Starting controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key"
	I0328 01:33:27.862639    6044 command_runner.go:130] ! I0328 01:32:19.292314       1 apiservice_controller.go:97] Starting APIServiceRegistrationController
	I0328 01:33:27.862639    6044 command_runner.go:130] ! I0328 01:32:19.292353       1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
	I0328 01:33:27.862639    6044 command_runner.go:130] ! I0328 01:32:19.292376       1 controller.go:78] Starting OpenAPI AggregationController
	I0328 01:33:27.862639    6044 command_runner.go:130] ! I0328 01:32:19.293395       1 system_namespaces_controller.go:67] Starting system namespaces controller
	I0328 01:33:27.862639    6044 command_runner.go:130] ! I0328 01:32:19.293575       1 customresource_discovery_controller.go:289] Starting DiscoveryController
	I0328 01:33:27.862639    6044 command_runner.go:130] ! I0328 01:32:19.279263       1 handler_discovery.go:412] Starting ResourceDiscoveryManager
	I0328 01:33:27.862639    6044 command_runner.go:130] ! I0328 01:32:19.301011       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0328 01:33:27.863178    6044 command_runner.go:130] ! I0328 01:32:19.301029       1 shared_informer.go:311] Waiting for caches to sync for crd-autoregister
	I0328 01:33:27.863178    6044 command_runner.go:130] ! I0328 01:32:19.304174       1 controller.go:133] Starting OpenAPI controller
	I0328 01:33:27.863225    6044 command_runner.go:130] ! I0328 01:32:19.304213       1 controller.go:85] Starting OpenAPI V3 controller
	I0328 01:33:27.863225    6044 command_runner.go:130] ! I0328 01:32:19.306745       1 naming_controller.go:291] Starting NamingConditionController
	I0328 01:33:27.863225    6044 command_runner.go:130] ! I0328 01:32:19.306779       1 establishing_controller.go:76] Starting EstablishingController
	I0328 01:33:27.863273    6044 command_runner.go:130] ! I0328 01:32:19.306794       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0328 01:33:27.863273    6044 command_runner.go:130] ! I0328 01:32:19.306807       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0328 01:33:27.863308    6044 command_runner.go:130] ! I0328 01:32:19.306818       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0328 01:33:27.863308    6044 command_runner.go:130] ! I0328 01:32:19.279295       1 apf_controller.go:374] Starting API Priority and Fairness config controller
	I0328 01:33:27.863308    6044 command_runner.go:130] ! I0328 01:32:19.279442       1 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
	I0328 01:33:27.863381    6044 command_runner.go:130] ! I0328 01:32:19.312069       1 shared_informer.go:311] Waiting for caches to sync for cluster_authentication_trust_controller
	I0328 01:33:27.863381    6044 command_runner.go:130] ! I0328 01:32:19.334928       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0328 01:33:27.863381    6044 command_runner.go:130] ! I0328 01:32:19.335653       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0328 01:33:27.863381    6044 command_runner.go:130] ! I0328 01:32:19.499336       1 shared_informer.go:318] Caches are synced for configmaps
	I0328 01:33:27.863381    6044 command_runner.go:130] ! I0328 01:32:19.501912       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0328 01:33:27.863381    6044 command_runner.go:130] ! I0328 01:32:19.504433       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0328 01:33:27.863381    6044 command_runner.go:130] ! I0328 01:32:19.506496       1 aggregator.go:165] initial CRD sync complete...
	I0328 01:33:27.863381    6044 command_runner.go:130] ! I0328 01:32:19.506538       1 autoregister_controller.go:141] Starting autoregister controller
	I0328 01:33:27.863381    6044 command_runner.go:130] ! I0328 01:32:19.506548       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0328 01:33:27.863381    6044 command_runner.go:130] ! I0328 01:32:19.506871       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0328 01:33:27.863381    6044 command_runner.go:130] ! I0328 01:32:19.506977       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0328 01:33:27.863381    6044 command_runner.go:130] ! I0328 01:32:19.519086       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0328 01:33:27.863381    6044 command_runner.go:130] ! I0328 01:32:19.542058       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0328 01:33:27.863381    6044 command_runner.go:130] ! I0328 01:32:19.580921       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0328 01:33:27.863381    6044 command_runner.go:130] ! I0328 01:32:19.592848       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0328 01:33:27.863381    6044 command_runner.go:130] ! I0328 01:32:19.608262       1 cache.go:39] Caches are synced for autoregister controller
	I0328 01:33:27.863381    6044 command_runner.go:130] ! I0328 01:32:20.302603       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0328 01:33:27.863381    6044 command_runner.go:130] ! W0328 01:32:20.857698       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.28.227.122 172.28.229.19]
	I0328 01:33:27.863381    6044 command_runner.go:130] ! I0328 01:32:20.859624       1 controller.go:624] quota admission added evaluator for: endpoints
	I0328 01:33:27.863381    6044 command_runner.go:130] ! I0328 01:32:20.870212       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0328 01:33:27.863381    6044 command_runner.go:130] ! I0328 01:32:22.795650       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0328 01:33:27.863381    6044 command_runner.go:130] ! I0328 01:32:23.151124       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0328 01:33:27.863381    6044 command_runner.go:130] ! I0328 01:32:23.177645       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0328 01:33:27.863381    6044 command_runner.go:130] ! I0328 01:32:23.338313       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0328 01:33:27.863381    6044 command_runner.go:130] ! I0328 01:32:23.353620       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0328 01:33:27.863381    6044 command_runner.go:130] ! W0328 01:32:40.864669       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.28.229.19]
	I0328 01:33:27.870477    6044 logs.go:123] Gathering logs for coredns [e6a5a75ec447] ...
	I0328 01:33:27.870477    6044 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6a5a75ec447"
	I0328 01:33:27.902923    6044 command_runner.go:130] > .:53
	I0328 01:33:27.902992    6044 command_runner.go:130] > [INFO] plugin/reload: Running configuration SHA512 = 61f4d0960164fdf8d8157aaa96d041acf5b29f3c98ba802d705114162ff9f2cc889bbb973f9b8023f3112734912ee6f4eadc4faa21115183d5697de30dae3805
	I0328 01:33:27.902992    6044 command_runner.go:130] > CoreDNS-1.11.1
	I0328 01:33:27.902992    6044 command_runner.go:130] > linux/amd64, go1.20.7, ae2bbc2
	I0328 01:33:27.902992    6044 command_runner.go:130] > [INFO] 127.0.0.1:56542 - 57483 "HINFO IN 863318367541877849.2825438388179145044. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.037994825s
	I0328 01:33:27.903895    6044 logs.go:123] Gathering logs for kindnet [ee99098e42fc] ...
	I0328 01:33:27.903960    6044 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee99098e42fc"
	I0328 01:33:27.935867    6044 command_runner.go:130] ! I0328 01:32:22.319753       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0328 01:33:27.935867    6044 command_runner.go:130] ! I0328 01:32:22.320254       1 main.go:107] hostIP = 172.28.229.19
	I0328 01:33:27.935867    6044 command_runner.go:130] ! podIP = 172.28.229.19
	I0328 01:33:27.935867    6044 command_runner.go:130] ! I0328 01:32:22.321740       1 main.go:116] setting mtu 1500 for CNI 
	I0328 01:33:27.935867    6044 command_runner.go:130] ! I0328 01:32:22.321777       1 main.go:146] kindnetd IP family: "ipv4"
	I0328 01:33:27.935867    6044 command_runner.go:130] ! I0328 01:32:22.321799       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0328 01:33:27.935867    6044 command_runner.go:130] ! I0328 01:32:52.738929       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: i/o timeout
	I0328 01:33:27.935867    6044 command_runner.go:130] ! I0328 01:32:52.794200       1 main.go:223] Handling node with IPs: map[172.28.229.19:{}]
	I0328 01:33:27.935867    6044 command_runner.go:130] ! I0328 01:32:52.794320       1 main.go:227] handling current node
	I0328 01:33:27.935867    6044 command_runner.go:130] ! I0328 01:32:52.794662       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:27.935867    6044 command_runner.go:130] ! I0328 01:32:52.794805       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:27.935867    6044 command_runner.go:130] ! I0328 01:32:52.794957       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 172.28.230.250 Flags: [] Table: 0} 
	I0328 01:33:27.935867    6044 command_runner.go:130] ! I0328 01:32:52.795458       1 main.go:223] Handling node with IPs: map[172.28.224.172:{}]
	I0328 01:33:27.935867    6044 command_runner.go:130] ! I0328 01:32:52.795540       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.3.0/24] 
	I0328 01:33:27.935867    6044 command_runner.go:130] ! I0328 01:32:52.795606       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.3.0/24 Src: <nil> Gw: 172.28.224.172 Flags: [] Table: 0} 
	I0328 01:33:27.935867    6044 command_runner.go:130] ! I0328 01:33:02.803479       1 main.go:223] Handling node with IPs: map[172.28.229.19:{}]
	I0328 01:33:27.935867    6044 command_runner.go:130] ! I0328 01:33:02.803569       1 main.go:227] handling current node
	I0328 01:33:27.935867    6044 command_runner.go:130] ! I0328 01:33:02.803584       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:27.935867    6044 command_runner.go:130] ! I0328 01:33:02.803592       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:27.935867    6044 command_runner.go:130] ! I0328 01:33:02.803771       1 main.go:223] Handling node with IPs: map[172.28.224.172:{}]
	I0328 01:33:27.935867    6044 command_runner.go:130] ! I0328 01:33:02.803938       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.3.0/24] 
	I0328 01:33:27.935867    6044 command_runner.go:130] ! I0328 01:33:12.813148       1 main.go:223] Handling node with IPs: map[172.28.229.19:{}]
	I0328 01:33:27.935867    6044 command_runner.go:130] ! I0328 01:33:12.813258       1 main.go:227] handling current node
	I0328 01:33:27.935867    6044 command_runner.go:130] ! I0328 01:33:12.813273       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:27.935867    6044 command_runner.go:130] ! I0328 01:33:12.813281       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:27.935867    6044 command_runner.go:130] ! I0328 01:33:12.813393       1 main.go:223] Handling node with IPs: map[172.28.224.172:{}]
	I0328 01:33:27.935867    6044 command_runner.go:130] ! I0328 01:33:12.813441       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.3.0/24] 
	I0328 01:33:27.935867    6044 command_runner.go:130] ! I0328 01:33:22.829358       1 main.go:223] Handling node with IPs: map[172.28.229.19:{}]
	I0328 01:33:27.935867    6044 command_runner.go:130] ! I0328 01:33:22.829449       1 main.go:227] handling current node
	I0328 01:33:27.935867    6044 command_runner.go:130] ! I0328 01:33:22.829466       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:27.935867    6044 command_runner.go:130] ! I0328 01:33:22.829475       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:27.936873    6044 command_runner.go:130] ! I0328 01:33:22.829915       1 main.go:223] Handling node with IPs: map[172.28.224.172:{}]
	I0328 01:33:27.936873    6044 command_runner.go:130] ! I0328 01:33:22.829982       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.3.0/24] 
	I0328 01:33:27.939860    6044 logs.go:123] Gathering logs for container status ...
	I0328 01:33:27.939860    6044 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:33:28.061097    6044 command_runner.go:130] > CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	I0328 01:33:28.061097    6044 command_runner.go:130] > dea6e77fe6072       8c811b4aec35f                                                                                         4 seconds ago        Running             busybox                   1                   57a41fbc578d5       busybox-7fdf7869d9-ct428
	I0328 01:33:28.061097    6044 command_runner.go:130] > e6a5a75ec447f       cbb01a7bd410d                                                                                         4 seconds ago        Running             coredns                   1                   d3a9caca46521       coredns-76f75df574-776ph
	I0328 01:33:28.061097    6044 command_runner.go:130] > 64647587ffc1f       6e38f40d628db                                                                                         24 seconds ago       Running             storage-provisioner       2                   821d3cf9ae1a9       storage-provisioner
	I0328 01:33:28.061097    6044 command_runner.go:130] > ee99098e42fc1       4950bb10b3f87                                                                                         About a minute ago   Running             kindnet-cni               1                   347f7ad7ebaed       kindnet-rwghf
	I0328 01:33:28.061097    6044 command_runner.go:130] > 4dcf03394ea80       6e38f40d628db                                                                                         About a minute ago   Exited              storage-provisioner       1                   821d3cf9ae1a9       storage-provisioner
	I0328 01:33:28.061097    6044 command_runner.go:130] > 7c9638784c60f       a1d263b5dc5b0                                                                                         About a minute ago   Running             kube-proxy                1                   dfd01cb54b7d8       kube-proxy-47rqg
	I0328 01:33:28.061097    6044 command_runner.go:130] > 6539c85e1b61f       39f995c9f1996                                                                                         About a minute ago   Running             kube-apiserver            0                   4dd7c46520744       kube-apiserver-multinode-240000
	I0328 01:33:28.061097    6044 command_runner.go:130] > ab4a76ecb029b       3861cfcd7c04c                                                                                         About a minute ago   Running             etcd                      0                   8780a18ab9755       etcd-multinode-240000
	I0328 01:33:28.061097    6044 command_runner.go:130] > bc83a37dbd03c       8c390d98f50c0                                                                                         About a minute ago   Running             kube-scheduler            1                   8cf9dbbfda9ea       kube-scheduler-multinode-240000
	I0328 01:33:28.061097    6044 command_runner.go:130] > ceaccf323deed       6052a25da3f97                                                                                         About a minute ago   Running             kube-controller-manager   1                   3314134e34d83       kube-controller-manager-multinode-240000
	I0328 01:33:28.061097    6044 command_runner.go:130] > a130300bc7839       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   21 minutes ago       Exited              busybox                   0                   930fbfde452c0       busybox-7fdf7869d9-ct428
	I0328 01:33:28.061097    6044 command_runner.go:130] > 29e516c918ef4       cbb01a7bd410d                                                                                         25 minutes ago       Exited              coredns                   0                   6b6f67390b070       coredns-76f75df574-776ph
	I0328 01:33:28.061097    6044 command_runner.go:130] > dc9808261b21c       kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988              25 minutes ago       Exited              kindnet-cni               0                   6ae82cd0a8489       kindnet-rwghf
	I0328 01:33:28.061097    6044 command_runner.go:130] > bb0b3c5422645       a1d263b5dc5b0                                                                                         25 minutes ago       Exited              kube-proxy                0                   5d9ed3a20e885       kube-proxy-47rqg
	I0328 01:33:28.061097    6044 command_runner.go:130] > 1aa05268773e4       6052a25da3f97                                                                                         26 minutes ago       Exited              kube-controller-manager   0                   763932cfdf0b0       kube-controller-manager-multinode-240000
	I0328 01:33:28.061625    6044 command_runner.go:130] > 7061eab02790d       8c390d98f50c0                                                                                         26 minutes ago       Exited              kube-scheduler            0                   7415d077c6f81       kube-scheduler-multinode-240000
	I0328 01:33:28.064088    6044 logs.go:123] Gathering logs for etcd [ab4a76ecb029] ...
	I0328 01:33:28.064088    6044 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab4a76ecb029"
	I0328 01:33:28.098061    6044 command_runner.go:130] ! {"level":"warn","ts":"2024-03-28T01:32:15.724971Z","caller":"embed/config.go:679","msg":"Running http and grpc server on single port. This is not recommended for production."}
	I0328 01:33:28.098061    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:15.726473Z","caller":"etcdmain/etcd.go:73","msg":"Running: ","args":["etcd","--advertise-client-urls=https://172.28.229.19:2379","--cert-file=/var/lib/minikube/certs/etcd/server.crt","--client-cert-auth=true","--data-dir=/var/lib/minikube/etcd","--experimental-initial-corrupt-check=true","--experimental-watch-progress-notify-interval=5s","--initial-advertise-peer-urls=https://172.28.229.19:2380","--initial-cluster=multinode-240000=https://172.28.229.19:2380","--key-file=/var/lib/minikube/certs/etcd/server.key","--listen-client-urls=https://127.0.0.1:2379,https://172.28.229.19:2379","--listen-metrics-urls=http://127.0.0.1:2381","--listen-peer-urls=https://172.28.229.19:2380","--name=multinode-240000","--peer-cert-file=/var/lib/minikube/certs/etcd/peer.crt","--peer-client-cert-auth=true","--peer-key-file=/var/lib/minikube/certs/etcd/peer.key","--peer-trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt","--proxy-refresh-interval=70000","--snapshot-count=10000","--trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt"]}
	I0328 01:33:28.098061    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:15.727203Z","caller":"etcdmain/etcd.go:116","msg":"server has been already initialized","data-dir":"/var/lib/minikube/etcd","dir-type":"member"}
	I0328 01:33:28.098061    6044 command_runner.go:130] ! {"level":"warn","ts":"2024-03-28T01:32:15.727384Z","caller":"embed/config.go:679","msg":"Running http and grpc server on single port. This is not recommended for production."}
	I0328 01:33:28.098061    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:15.727623Z","caller":"embed/etcd.go:127","msg":"configuring peer listeners","listen-peer-urls":["https://172.28.229.19:2380"]}
	I0328 01:33:28.098061    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:15.728158Z","caller":"embed/etcd.go:494","msg":"starting with peer TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	I0328 01:33:28.098061    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:15.738374Z","caller":"embed/etcd.go:135","msg":"configuring client listeners","listen-client-urls":["https://127.0.0.1:2379","https://172.28.229.19:2379"]}
	I0328 01:33:28.098061    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:15.74108Z","caller":"embed/etcd.go:308","msg":"starting an etcd server","etcd-version":"3.5.12","git-sha":"e7b3bb6cc","go-version":"go1.20.13","go-os":"linux","go-arch":"amd64","max-cpu-set":2,"max-cpu-available":2,"member-initialized":true,"name":"multinode-240000","data-dir":"/var/lib/minikube/etcd","wal-dir":"","wal-dir-dedicated":"","member-dir":"/var/lib/minikube/etcd/member","force-new-cluster":false,"heartbeat-interval":"100ms","election-timeout":"1s","initial-election-tick-advance":true,"snapshot-count":10000,"max-wals":5,"max-snapshots":5,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["https://172.28.229.19:2380"],"listen-peer-urls":["https://172.28.229.19:2380"],"advertise-client-urls":["https://172.28.229.19:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.28.229.19:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"],"cors":["*"],"host-whitelist":["*"],"initial-cluster":"","initial-cluster-state":"new","initial-cluster-token":"","quota-backend-bytes":2147483648,"max-request-bytes":1572864,"max-concurrent-streams":4294967295,"pre-vote":true,"initial-corrupt-check":true,"corrupt-check-time-interval":"0s","compact-check-time-enabled":false,"compact-check-time-interval":"1m0s","auto-compaction-mode":"periodic","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":"","downgrade-check-interval":"5s"}
	I0328 01:33:28.098061    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:15.764546Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"21.677054ms"}
	I0328 01:33:28.098640    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:15.798451Z","caller":"etcdserver/server.go:532","msg":"No snapshot found. Recovering WAL from scratch!"}
	I0328 01:33:28.098725    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:15.829844Z","caller":"etcdserver/raft.go:530","msg":"restarting local member","cluster-id":"9d63dbc5e8f5386f","local-member-id":"8337aaa1903c5250","commit-index":2146}
	I0328 01:33:28.098725    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:15.830336Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8337aaa1903c5250 switched to configuration voters=()"}
	I0328 01:33:28.098725    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:15.830979Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8337aaa1903c5250 became follower at term 2"}
	I0328 01:33:28.098830    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:15.831279Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft 8337aaa1903c5250 [peers: [], term: 2, commit: 2146, applied: 0, lastindex: 2146, lastterm: 2]"}
	I0328 01:33:28.098830    6044 command_runner.go:130] ! {"level":"warn","ts":"2024-03-28T01:32:15.847923Z","caller":"auth/store.go:1241","msg":"simple token is not cryptographically signed"}
	I0328 01:33:28.098830    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:15.855761Z","caller":"mvcc/kvstore.go:341","msg":"restored last compact revision","meta-bucket-name":"meta","meta-bucket-name-key":"finishedCompactRev","restored-compact-revision":1393}
	I0328 01:33:28.098830    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:15.869333Z","caller":"mvcc/kvstore.go:407","msg":"kvstore restored","current-rev":1856}
	I0328 01:33:28.098906    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:15.878748Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	I0328 01:33:28.098906    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:15.88958Z","caller":"etcdserver/corrupt.go:96","msg":"starting initial corruption check","local-member-id":"8337aaa1903c5250","timeout":"7s"}
	I0328 01:33:28.098953    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:15.890509Z","caller":"etcdserver/corrupt.go:177","msg":"initial corruption checking passed; no corruption","local-member-id":"8337aaa1903c5250"}
	I0328 01:33:28.098953    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:15.890567Z","caller":"etcdserver/server.go:860","msg":"starting etcd server","local-member-id":"8337aaa1903c5250","local-server-version":"3.5.12","cluster-version":"to_be_decided"}
	I0328 01:33:28.098953    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:15.891226Z","caller":"etcdserver/server.go:760","msg":"starting initial election tick advance","election-ticks":10}
	I0328 01:33:28.098953    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:15.894393Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	I0328 01:33:28.098953    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:15.894489Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	I0328 01:33:28.098953    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:15.894506Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	I0328 01:33:28.098953    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:15.895008Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8337aaa1903c5250 switched to configuration voters=(9455213553573974608)"}
	I0328 01:33:28.098953    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:15.895115Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"9d63dbc5e8f5386f","local-member-id":"8337aaa1903c5250","added-peer-id":"8337aaa1903c5250","added-peer-peer-urls":["https://172.28.227.122:2380"]}
	I0328 01:33:28.098953    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:15.895259Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"9d63dbc5e8f5386f","local-member-id":"8337aaa1903c5250","cluster-version":"3.5"}
	I0328 01:33:28.098953    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:15.895348Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	I0328 01:33:28.098953    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:15.908515Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	I0328 01:33:28.099370    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:15.908865Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"8337aaa1903c5250","initial-advertise-peer-urls":["https://172.28.229.19:2380"],"listen-peer-urls":["https://172.28.229.19:2380"],"advertise-client-urls":["https://172.28.229.19:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.28.229.19:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	I0328 01:33:28.099370    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:15.908914Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	I0328 01:33:28.099370    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:15.908997Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"172.28.229.19:2380"}
	I0328 01:33:28.099370    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:15.909011Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"172.28.229.19:2380"}
	I0328 01:33:28.099370    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:17.232003Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8337aaa1903c5250 is starting a new election at term 2"}
	I0328 01:33:28.099505    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:17.232075Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8337aaa1903c5250 became pre-candidate at term 2"}
	I0328 01:33:28.099542    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:17.232112Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8337aaa1903c5250 received MsgPreVoteResp from 8337aaa1903c5250 at term 2"}
	I0328 01:33:28.099579    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:17.232126Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8337aaa1903c5250 became candidate at term 3"}
	I0328 01:33:28.099638    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:17.232135Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8337aaa1903c5250 received MsgVoteResp from 8337aaa1903c5250 at term 3"}
	I0328 01:33:28.099638    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:17.232146Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8337aaa1903c5250 became leader at term 3"}
	I0328 01:33:28.099638    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:17.232158Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 8337aaa1903c5250 elected leader 8337aaa1903c5250 at term 3"}
	I0328 01:33:28.099719    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:17.237341Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"8337aaa1903c5250","local-member-attributes":"{Name:multinode-240000 ClientURLs:[https://172.28.229.19:2379]}","request-path":"/0/members/8337aaa1903c5250/attributes","cluster-id":"9d63dbc5e8f5386f","publish-timeout":"7s"}
	I0328 01:33:28.099762    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:17.237562Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	I0328 01:33:28.099762    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:17.239961Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	I0328 01:33:28.099800    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:17.263569Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	I0328 01:33:28.099800    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:17.263595Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	I0328 01:33:28.099862    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:17.283007Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.28.229.19:2379"}
	I0328 01:33:28.099886    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:17.301354Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	I0328 01:33:28.107018    6044 logs.go:123] Gathering logs for coredns [29e516c918ef] ...
	I0328 01:33:28.107018    6044 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29e516c918ef"
	I0328 01:33:28.155039    6044 command_runner.go:130] > .:53
	I0328 01:33:28.156040    6044 command_runner.go:130] > [INFO] plugin/reload: Running configuration SHA512 = 61f4d0960164fdf8d8157aaa96d041acf5b29f3c98ba802d705114162ff9f2cc889bbb973f9b8023f3112734912ee6f4eadc4faa21115183d5697de30dae3805
	I0328 01:33:28.156040    6044 command_runner.go:130] > CoreDNS-1.11.1
	I0328 01:33:28.156040    6044 command_runner.go:130] > linux/amd64, go1.20.7, ae2bbc2
	I0328 01:33:28.156040    6044 command_runner.go:130] > [INFO] 127.0.0.1:60283 - 16312 "HINFO IN 2326044719089555672.3300393267380208701. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.054677372s
	I0328 01:33:28.156040    6044 command_runner.go:130] > [INFO] 10.244.0.3:41371 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000247501s
	I0328 01:33:28.156040    6044 command_runner.go:130] > [INFO] 10.244.0.3:43447 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.117900616s
	I0328 01:33:28.156040    6044 command_runner.go:130] > [INFO] 10.244.0.3:42513 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd 60 0.033474818s
	I0328 01:33:28.156040    6044 command_runner.go:130] > [INFO] 10.244.0.3:40448 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 1.188161196s
	I0328 01:33:28.156040    6044 command_runner.go:130] > [INFO] 10.244.1.2:56943 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000152401s
	I0328 01:33:28.156040    6044 command_runner.go:130] > [INFO] 10.244.1.2:41058 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 140 0.000086901s
	I0328 01:33:28.156040    6044 command_runner.go:130] > [INFO] 10.244.1.2:34293 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd 60 0.0000605s
	I0328 01:33:28.156040    6044 command_runner.go:130] > [INFO] 10.244.1.2:49894 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,aa,rd,ra 140 0.00006s
	I0328 01:33:28.156040    6044 command_runner.go:130] > [INFO] 10.244.0.3:49837 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001111s
	I0328 01:33:28.156040    6044 command_runner.go:130] > [INFO] 10.244.0.3:33220 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.017189461s
	I0328 01:33:28.156040    6044 command_runner.go:130] > [INFO] 10.244.0.3:45579 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000277601s
	I0328 01:33:28.156040    6044 command_runner.go:130] > [INFO] 10.244.0.3:51082 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000190101s
	I0328 01:33:28.156040    6044 command_runner.go:130] > [INFO] 10.244.0.3:51519 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.026528294s
	I0328 01:33:28.156040    6044 command_runner.go:130] > [INFO] 10.244.0.3:59498 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000117701s
	I0328 01:33:28.156040    6044 command_runner.go:130] > [INFO] 10.244.0.3:42474 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000217s
	I0328 01:33:28.156040    6044 command_runner.go:130] > [INFO] 10.244.0.3:60151 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0001204s
	I0328 01:33:28.156040    6044 command_runner.go:130] > [INFO] 10.244.1.2:50831 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001128s
	I0328 01:33:28.156040    6044 command_runner.go:130] > [INFO] 10.244.1.2:41628 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.0000727s
	I0328 01:33:28.156040    6044 command_runner.go:130] > [INFO] 10.244.1.2:58750 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000090601s
	I0328 01:33:28.156040    6044 command_runner.go:130] > [INFO] 10.244.1.2:59003 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0000565s
	I0328 01:33:28.156040    6044 command_runner.go:130] > [INFO] 10.244.1.2:44988 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.0000534s
	I0328 01:33:28.156040    6044 command_runner.go:130] > [INFO] 10.244.1.2:46242 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0000553s
	I0328 01:33:28.156040    6044 command_runner.go:130] > [INFO] 10.244.1.2:54917 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0000638s
	I0328 01:33:28.156040    6044 command_runner.go:130] > [INFO] 10.244.1.2:39304 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000177201s
	I0328 01:33:28.156040    6044 command_runner.go:130] > [INFO] 10.244.0.3:48823 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0000796s
	I0328 01:33:28.156040    6044 command_runner.go:130] > [INFO] 10.244.0.3:44709 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000142901s
	I0328 01:33:28.156040    6044 command_runner.go:130] > [INFO] 10.244.0.3:48375 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0000774s
	I0328 01:33:28.156040    6044 command_runner.go:130] > [INFO] 10.244.0.3:58925 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000125101s
	I0328 01:33:28.156040    6044 command_runner.go:130] > [INFO] 10.244.1.2:59246 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001171s
	I0328 01:33:28.156040    6044 command_runner.go:130] > [INFO] 10.244.1.2:47730 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0000697s
	I0328 01:33:28.156040    6044 command_runner.go:130] > [INFO] 10.244.1.2:33031 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0000695s
	I0328 01:33:28.156040    6044 command_runner.go:130] > [INFO] 10.244.1.2:50853 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000057s
	I0328 01:33:28.156040    6044 command_runner.go:130] > [INFO] 10.244.0.3:39682 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000390002s
	I0328 01:33:28.156040    6044 command_runner.go:130] > [INFO] 10.244.0.3:52761 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000108301s
	I0328 01:33:28.156040    6044 command_runner.go:130] > [INFO] 10.244.0.3:46476 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000158601s
	I0328 01:33:28.156040    6044 command_runner.go:130] > [INFO] 10.244.0.3:57613 - 5 "PTR IN 1.224.28.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.0000931s
	I0328 01:33:28.156040    6044 command_runner.go:130] > [INFO] 10.244.1.2:43367 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000233301s
	I0328 01:33:28.156040    6044 command_runner.go:130] > [INFO] 10.244.1.2:50120 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.0002331s
	I0328 01:33:28.156040    6044 command_runner.go:130] > [INFO] 10.244.1.2:43779 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.0000821s
	I0328 01:33:28.156040    6044 command_runner.go:130] > [INFO] 10.244.1.2:37155 - 5 "PTR IN 1.224.28.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.0000589s
	I0328 01:33:28.156040    6044 command_runner.go:130] > [INFO] SIGTERM: Shutting down servers then terminating
	I0328 01:33:28.156040    6044 command_runner.go:130] > [INFO] plugin/health: Going into lameduck mode for 5s
	I0328 01:33:28.159082    6044 logs.go:123] Gathering logs for kube-scheduler [bc83a37dbd03] ...
	I0328 01:33:28.159082    6044 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc83a37dbd03"
	I0328 01:33:28.191039    6044 command_runner.go:130] ! I0328 01:32:16.704993       1 serving.go:380] Generated self-signed cert in-memory
	I0328 01:33:28.191039    6044 command_runner.go:130] ! W0328 01:32:19.361735       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	I0328 01:33:28.191039    6044 command_runner.go:130] ! W0328 01:32:19.361772       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	I0328 01:33:28.191039    6044 command_runner.go:130] ! W0328 01:32:19.361786       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	I0328 01:33:28.191039    6044 command_runner.go:130] ! W0328 01:32:19.361794       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0328 01:33:28.191039    6044 command_runner.go:130] ! I0328 01:32:19.443650       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.29.3"
	I0328 01:33:28.191039    6044 command_runner.go:130] ! I0328 01:32:19.443696       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0328 01:33:28.191039    6044 command_runner.go:130] ! I0328 01:32:19.451824       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0328 01:33:28.191039    6044 command_runner.go:130] ! I0328 01:32:19.452157       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0328 01:33:28.191039    6044 command_runner.go:130] ! I0328 01:32:19.452206       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0328 01:33:28.191039    6044 command_runner.go:130] ! I0328 01:32:19.452231       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0328 01:33:28.191039    6044 command_runner.go:130] ! I0328 01:32:19.556393       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0328 01:33:28.193509    6044 logs.go:123] Gathering logs for kube-proxy [7c9638784c60] ...
	I0328 01:33:28.193567    6044 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c9638784c60"
	I0328 01:33:28.221714    6044 command_runner.go:130] ! I0328 01:32:22.346613       1 server_others.go:72] "Using iptables proxy"
	I0328 01:33:28.221714    6044 command_runner.go:130] ! I0328 01:32:22.432600       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["172.28.229.19"]
	I0328 01:33:28.221714    6044 command_runner.go:130] ! I0328 01:32:22.670309       1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
	I0328 01:33:28.221714    6044 command_runner.go:130] ! I0328 01:32:22.670342       1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0328 01:33:28.222640    6044 command_runner.go:130] ! I0328 01:32:22.670422       1 server_others.go:168] "Using iptables Proxier"
	I0328 01:33:28.222640    6044 command_runner.go:130] ! I0328 01:32:22.691003       1 proxier.go:245] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0328 01:33:28.222640    6044 command_runner.go:130] ! I0328 01:32:22.691955       1 server.go:865] "Version info" version="v1.29.3"
	I0328 01:33:28.222698    6044 command_runner.go:130] ! I0328 01:32:22.691998       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0328 01:33:28.222698    6044 command_runner.go:130] ! I0328 01:32:22.703546       1 config.go:188] "Starting service config controller"
	I0328 01:33:28.222748    6044 command_runner.go:130] ! I0328 01:32:22.706995       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0328 01:33:28.222765    6044 command_runner.go:130] ! I0328 01:32:22.707357       1 config.go:97] "Starting endpoint slice config controller"
	I0328 01:33:28.222765    6044 command_runner.go:130] ! I0328 01:32:22.707370       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0328 01:33:28.223012    6044 command_runner.go:130] ! I0328 01:32:22.708174       1 config.go:315] "Starting node config controller"
	I0328 01:33:28.223266    6044 command_runner.go:130] ! I0328 01:32:22.708184       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0328 01:33:28.223266    6044 command_runner.go:130] ! I0328 01:32:22.807593       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0328 01:33:28.223266    6044 command_runner.go:130] ! I0328 01:32:22.807699       1 shared_informer.go:318] Caches are synced for service config
	I0328 01:33:28.223266    6044 command_runner.go:130] ! I0328 01:32:22.808439       1 shared_informer.go:318] Caches are synced for node config
	I0328 01:33:28.225626    6044 logs.go:123] Gathering logs for kube-controller-manager [ceaccf323dee] ...
	I0328 01:33:28.225626    6044 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ceaccf323dee"
	I0328 01:33:28.255210    6044 command_runner.go:130] ! I0328 01:32:17.221400       1 serving.go:380] Generated self-signed cert in-memory
	I0328 01:33:28.255210    6044 command_runner.go:130] ! I0328 01:32:17.938996       1 controllermanager.go:187] "Starting" version="v1.29.3"
	I0328 01:33:28.255277    6044 command_runner.go:130] ! I0328 01:32:17.939043       1 controllermanager.go:189] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0328 01:33:28.255277    6044 command_runner.go:130] ! I0328 01:32:17.943203       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0328 01:33:28.255277    6044 command_runner.go:130] ! I0328 01:32:17.943369       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0328 01:33:28.255337    6044 command_runner.go:130] ! I0328 01:32:17.944549       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0328 01:33:28.255337    6044 command_runner.go:130] ! I0328 01:32:17.944700       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0328 01:33:28.255337    6044 command_runner.go:130] ! I0328 01:32:21.401842       1 controllermanager.go:735] "Started controller" controller="serviceaccount-token-controller"
	I0328 01:33:28.255337    6044 command_runner.go:130] ! I0328 01:32:21.405585       1 shared_informer.go:311] Waiting for caches to sync for tokens
	I0328 01:33:28.255337    6044 command_runner.go:130] ! I0328 01:32:21.409924       1 controllermanager.go:735] "Started controller" controller="cronjob-controller"
	I0328 01:33:28.255430    6044 command_runner.go:130] ! I0328 01:32:21.410592       1 cronjob_controllerv2.go:139] "Starting cronjob controller v2"
	I0328 01:33:28.255430    6044 command_runner.go:130] ! I0328 01:32:21.410608       1 shared_informer.go:311] Waiting for caches to sync for cronjob
	I0328 01:33:28.255430    6044 command_runner.go:130] ! I0328 01:32:21.415437       1 controllermanager.go:735] "Started controller" controller="certificatesigningrequest-cleaner-controller"
	I0328 01:33:28.255430    6044 command_runner.go:130] ! I0328 01:32:21.415588       1 cleaner.go:83] "Starting CSR cleaner controller"
	I0328 01:33:28.255430    6044 command_runner.go:130] ! I0328 01:32:21.423473       1 controllermanager.go:735] "Started controller" controller="pod-garbage-collector-controller"
	I0328 01:33:28.255430    6044 command_runner.go:130] ! I0328 01:32:21.424183       1 gc_controller.go:101] "Starting GC controller"
	I0328 01:33:28.255430    6044 command_runner.go:130] ! I0328 01:32:21.424205       1 shared_informer.go:311] Waiting for caches to sync for GC
	I0328 01:33:28.255522    6044 command_runner.go:130] ! I0328 01:32:21.428774       1 controllermanager.go:735] "Started controller" controller="replicaset-controller"
	I0328 01:33:28.255522    6044 command_runner.go:130] ! I0328 01:32:21.429480       1 replica_set.go:214] "Starting controller" name="replicaset"
	I0328 01:33:28.255522    6044 command_runner.go:130] ! I0328 01:32:21.429495       1 shared_informer.go:311] Waiting for caches to sync for ReplicaSet
	I0328 01:33:28.255522    6044 command_runner.go:130] ! I0328 01:32:21.434934       1 controllermanager.go:735] "Started controller" controller="token-cleaner-controller"
	I0328 01:33:28.255641    6044 command_runner.go:130] ! I0328 01:32:21.435336       1 tokencleaner.go:112] "Starting token cleaner controller"
	I0328 01:33:28.255641    6044 command_runner.go:130] ! I0328 01:32:21.440600       1 shared_informer.go:311] Waiting for caches to sync for token_cleaner
	I0328 01:33:28.255694    6044 command_runner.go:130] ! I0328 01:32:21.440609       1 shared_informer.go:318] Caches are synced for token_cleaner
	I0328 01:33:28.255714    6044 command_runner.go:130] ! I0328 01:32:21.447308       1 controllermanager.go:735] "Started controller" controller="persistentvolume-binder-controller"
	I0328 01:33:28.255714    6044 command_runner.go:130] ! I0328 01:32:21.450160       1 pv_controller_base.go:319] "Starting persistent volume controller"
	I0328 01:33:28.255714    6044 command_runner.go:130] ! I0328 01:32:21.450574       1 shared_informer.go:311] Waiting for caches to sync for persistent volume
	I0328 01:33:28.255777    6044 command_runner.go:130] ! I0328 01:32:21.459890       1 controllermanager.go:735] "Started controller" controller="taint-eviction-controller"
	I0328 01:33:28.255777    6044 command_runner.go:130] ! I0328 01:32:21.463892       1 taint_eviction.go:285] "Starting" controller="taint-eviction-controller"
	I0328 01:33:28.255803    6044 command_runner.go:130] ! I0328 01:32:21.464792       1 taint_eviction.go:291] "Sending events to api server"
	I0328 01:33:28.255920    6044 command_runner.go:130] ! I0328 01:32:21.465478       1 shared_informer.go:311] Waiting for caches to sync for taint-eviction-controller
	I0328 01:33:28.255963    6044 command_runner.go:130] ! I0328 01:32:21.467842       1 controllermanager.go:735] "Started controller" controller="endpoints-controller"
	I0328 01:33:28.255963    6044 command_runner.go:130] ! I0328 01:32:21.471786       1 endpoints_controller.go:174] "Starting endpoint controller"
	I0328 01:33:28.255963    6044 command_runner.go:130] ! I0328 01:32:21.472200       1 shared_informer.go:311] Waiting for caches to sync for endpoint
	I0328 01:33:28.255963    6044 command_runner.go:130] ! I0328 01:32:21.482388       1 controllermanager.go:735] "Started controller" controller="endpointslice-mirroring-controller"
	I0328 01:33:28.256036    6044 command_runner.go:130] ! I0328 01:32:21.482635       1 endpointslicemirroring_controller.go:223] "Starting EndpointSliceMirroring controller"
	I0328 01:33:28.256036    6044 command_runner.go:130] ! I0328 01:32:21.482650       1 shared_informer.go:311] Waiting for caches to sync for endpoint_slice_mirroring
	I0328 01:33:28.256036    6044 command_runner.go:130] ! I0328 01:32:21.506106       1 shared_informer.go:318] Caches are synced for tokens
	I0328 01:33:28.256098    6044 command_runner.go:130] ! I0328 01:32:21.543460       1 controllermanager.go:735] "Started controller" controller="namespace-controller"
	I0328 01:33:28.256098    6044 command_runner.go:130] ! I0328 01:32:21.543999       1 namespace_controller.go:197] "Starting namespace controller"
	I0328 01:33:28.256098    6044 command_runner.go:130] ! I0328 01:32:21.544021       1 shared_informer.go:311] Waiting for caches to sync for namespace
	I0328 01:33:28.256098    6044 command_runner.go:130] ! I0328 01:32:21.554383       1 controllermanager.go:735] "Started controller" controller="serviceaccount-controller"
	I0328 01:33:28.256098    6044 command_runner.go:130] ! I0328 01:32:21.555541       1 serviceaccounts_controller.go:111] "Starting service account controller"
	I0328 01:33:28.256209    6044 command_runner.go:130] ! I0328 01:32:21.555562       1 shared_informer.go:311] Waiting for caches to sync for service account
	I0328 01:33:28.256209    6044 command_runner.go:130] ! I0328 01:32:21.587795       1 garbagecollector.go:155] "Starting controller" controller="garbagecollector"
	I0328 01:33:28.256209    6044 command_runner.go:130] ! I0328 01:32:21.587823       1 shared_informer.go:311] Waiting for caches to sync for garbage collector
	I0328 01:33:28.256209    6044 command_runner.go:130] ! I0328 01:32:21.587848       1 graph_builder.go:302] "Running" component="GraphBuilder"
	I0328 01:33:28.256284    6044 command_runner.go:130] ! I0328 01:32:21.592263       1 controllermanager.go:735] "Started controller" controller="garbage-collector-controller"
	I0328 01:33:28.256284    6044 command_runner.go:130] ! E0328 01:32:21.607017       1 core.go:270] "Failed to start cloud node lifecycle controller" err="no cloud provider provided"
	I0328 01:33:28.256284    6044 command_runner.go:130] ! I0328 01:32:21.607046       1 controllermanager.go:713] "Warning: skipping controller" controller="cloud-node-lifecycle-controller"
	I0328 01:33:28.256348    6044 command_runner.go:130] ! I0328 01:32:21.629420       1 controllermanager.go:735] "Started controller" controller="persistentvolume-expander-controller"
	I0328 01:33:28.256348    6044 command_runner.go:130] ! I0328 01:32:21.629868       1 expand_controller.go:328] "Starting expand controller"
	I0328 01:33:28.256348    6044 command_runner.go:130] ! I0328 01:32:21.633210       1 shared_informer.go:311] Waiting for caches to sync for expand
	I0328 01:33:28.256348    6044 command_runner.go:130] ! I0328 01:32:21.640307       1 controllermanager.go:735] "Started controller" controller="endpointslice-controller"
	I0328 01:33:28.256409    6044 command_runner.go:130] ! I0328 01:32:21.640871       1 endpointslice_controller.go:264] "Starting endpoint slice controller"
	I0328 01:33:28.256433    6044 command_runner.go:130] ! I0328 01:32:21.641527       1 shared_informer.go:311] Waiting for caches to sync for endpoint_slice
	I0328 01:33:28.256433    6044 command_runner.go:130] ! I0328 01:32:21.649017       1 controllermanager.go:735] "Started controller" controller="replicationcontroller-controller"
	I0328 01:33:28.256461    6044 command_runner.go:130] ! I0328 01:32:21.649755       1 replica_set.go:214] "Starting controller" name="replicationcontroller"
	I0328 01:33:28.256461    6044 command_runner.go:130] ! I0328 01:32:21.649783       1 shared_informer.go:311] Waiting for caches to sync for ReplicationController
	I0328 01:33:28.256461    6044 command_runner.go:130] ! I0328 01:32:21.663585       1 controllermanager.go:735] "Started controller" controller="deployment-controller"
	I0328 01:33:28.256461    6044 command_runner.go:130] ! I0328 01:32:21.666026       1 deployment_controller.go:168] "Starting controller" controller="deployment"
	I0328 01:33:28.256461    6044 command_runner.go:130] ! I0328 01:32:21.666316       1 shared_informer.go:311] Waiting for caches to sync for deployment
	I0328 01:33:28.256461    6044 command_runner.go:130] ! I0328 01:32:21.701619       1 controllermanager.go:735] "Started controller" controller="disruption-controller"
	I0328 01:33:28.256461    6044 command_runner.go:130] ! I0328 01:32:21.705210       1 disruption.go:433] "Sending events to api server."
	I0328 01:33:28.256461    6044 command_runner.go:130] ! I0328 01:32:21.705303       1 disruption.go:444] "Starting disruption controller"
	I0328 01:33:28.256461    6044 command_runner.go:130] ! I0328 01:32:21.705318       1 shared_informer.go:311] Waiting for caches to sync for disruption
	I0328 01:33:28.256461    6044 command_runner.go:130] ! I0328 01:32:21.710857       1 controllermanager.go:735] "Started controller" controller="statefulset-controller"
	I0328 01:33:28.256461    6044 command_runner.go:130] ! I0328 01:32:21.711002       1 stateful_set.go:161] "Starting stateful set controller"
	I0328 01:33:28.256461    6044 command_runner.go:130] ! I0328 01:32:21.711016       1 shared_informer.go:311] Waiting for caches to sync for stateful set
	I0328 01:33:28.256461    6044 command_runner.go:130] ! I0328 01:32:21.722757       1 certificate_controller.go:115] "Starting certificate controller" name="csrsigning-kubelet-serving"
	I0328 01:33:28.256461    6044 command_runner.go:130] ! I0328 01:32:21.723222       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-kubelet-serving
	I0328 01:33:28.256461    6044 command_runner.go:130] ! I0328 01:32:21.723310       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0328 01:33:28.256461    6044 command_runner.go:130] ! I0328 01:32:21.725677       1 certificate_controller.go:115] "Starting certificate controller" name="csrsigning-kubelet-client"
	I0328 01:33:28.256461    6044 command_runner.go:130] ! I0328 01:32:21.725696       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-kubelet-client
	I0328 01:33:28.256461    6044 command_runner.go:130] ! I0328 01:32:21.725759       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0328 01:33:28.256461    6044 command_runner.go:130] ! I0328 01:32:21.726507       1 certificate_controller.go:115] "Starting certificate controller" name="csrsigning-kube-apiserver-client"
	I0328 01:33:28.256461    6044 command_runner.go:130] ! I0328 01:32:21.726521       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client
	I0328 01:33:28.256461    6044 command_runner.go:130] ! I0328 01:32:21.726539       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0328 01:33:28.256461    6044 command_runner.go:130] ! I0328 01:32:21.751095       1 certificate_controller.go:115] "Starting certificate controller" name="csrsigning-legacy-unknown"
	I0328 01:33:28.256461    6044 command_runner.go:130] ! I0328 01:32:21.751136       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
	I0328 01:33:28.257002    6044 command_runner.go:130] ! I0328 01:32:21.751164       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0328 01:33:28.257002    6044 command_runner.go:130] ! I0328 01:32:21.751048       1 controllermanager.go:735] "Started controller" controller="certificatesigningrequest-signing-controller"
	I0328 01:33:28.257002    6044 command_runner.go:130] ! E0328 01:32:21.760877       1 core.go:105] "Failed to start service controller" err="WARNING: no cloud provider provided, services of type LoadBalancer will fail"
	I0328 01:33:28.257049    6044 command_runner.go:130] ! I0328 01:32:21.761111       1 controllermanager.go:713] "Warning: skipping controller" controller="service-lb-controller"
	I0328 01:33:28.257049    6044 command_runner.go:130] ! I0328 01:32:21.770248       1 controllermanager.go:735] "Started controller" controller="persistentvolumeclaim-protection-controller"
	I0328 01:33:28.257049    6044 command_runner.go:130] ! I0328 01:32:21.771349       1 pvc_protection_controller.go:102] "Starting PVC protection controller"
	I0328 01:33:28.257049    6044 command_runner.go:130] ! I0328 01:32:21.771929       1 shared_informer.go:311] Waiting for caches to sync for PVC protection
	I0328 01:33:28.257167    6044 command_runner.go:130] ! I0328 01:32:21.788256       1 controllermanager.go:735] "Started controller" controller="root-ca-certificate-publisher-controller"
	I0328 01:33:28.257167    6044 command_runner.go:130] ! I0328 01:32:21.788511       1 publisher.go:102] "Starting root CA cert publisher controller"
	I0328 01:33:28.257167    6044 command_runner.go:130] ! I0328 01:32:21.788524       1 shared_informer.go:311] Waiting for caches to sync for crt configmap
	I0328 01:33:28.257236    6044 command_runner.go:130] ! I0328 01:32:21.815523       1 controllermanager.go:735] "Started controller" controller="legacy-serviceaccount-token-cleaner-controller"
	I0328 01:33:28.257261    6044 command_runner.go:130] ! I0328 01:32:21.815692       1 legacy_serviceaccount_token_cleaner.go:103] "Starting legacy service account token cleaner controller"
	I0328 01:33:28.257291    6044 command_runner.go:130] ! I0328 01:32:21.816619       1 shared_informer.go:311] Waiting for caches to sync for legacy-service-account-token-cleaner
	I0328 01:33:28.257291    6044 command_runner.go:130] ! I0328 01:32:21.873573       1 controllermanager.go:735] "Started controller" controller="horizontal-pod-autoscaler-controller"
	I0328 01:33:28.257291    6044 command_runner.go:130] ! I0328 01:32:21.873852       1 controllermanager.go:687] "Controller is disabled by a feature gate" controller="validatingadmissionpolicy-status-controller" requiredFeatureGates=["ValidatingAdmissionPolicy"]
	I0328 01:33:28.257291    6044 command_runner.go:130] ! I0328 01:32:21.873869       1 controllermanager.go:687] "Controller is disabled by a feature gate" controller="service-cidr-controller" requiredFeatureGates=["MultiCIDRServiceAllocator"]
	I0328 01:33:28.257291    6044 command_runner.go:130] ! I0328 01:32:21.873702       1 horizontal.go:200] "Starting HPA controller"
	I0328 01:33:28.257291    6044 command_runner.go:130] ! I0328 01:32:21.874098       1 shared_informer.go:311] Waiting for caches to sync for HPA
	I0328 01:33:28.257291    6044 command_runner.go:130] ! I0328 01:32:21.901041       1 controllermanager.go:735] "Started controller" controller="daemonset-controller"
	I0328 01:33:28.257291    6044 command_runner.go:130] ! I0328 01:32:21.901450       1 daemon_controller.go:291] "Starting daemon sets controller"
	I0328 01:33:28.257291    6044 command_runner.go:130] ! I0328 01:32:21.901466       1 shared_informer.go:311] Waiting for caches to sync for daemon sets
	I0328 01:33:28.257291    6044 command_runner.go:130] ! I0328 01:32:21.907150       1 controllermanager.go:735] "Started controller" controller="ttl-controller"
	I0328 01:33:28.257291    6044 command_runner.go:130] ! I0328 01:32:21.907285       1 ttl_controller.go:124] "Starting TTL controller"
	I0328 01:33:28.257291    6044 command_runner.go:130] ! I0328 01:32:21.907294       1 shared_informer.go:311] Waiting for caches to sync for TTL
	I0328 01:33:28.257291    6044 command_runner.go:130] ! I0328 01:32:21.918008       1 controllermanager.go:735] "Started controller" controller="bootstrap-signer-controller"
	I0328 01:33:28.257291    6044 command_runner.go:130] ! I0328 01:32:21.918049       1 core.go:294] "Warning: configure-cloud-routes is set, but no cloud provider specified. Will not configure cloud provider routes."
	I0328 01:33:28.257291    6044 command_runner.go:130] ! I0328 01:32:21.918077       1 controllermanager.go:713] "Warning: skipping controller" controller="node-route-controller"
	I0328 01:33:28.257291    6044 command_runner.go:130] ! I0328 01:32:21.918277       1 shared_informer.go:311] Waiting for caches to sync for bootstrap_signer
	I0328 01:33:28.257291    6044 command_runner.go:130] ! I0328 01:32:21.926280       1 controllermanager.go:735] "Started controller" controller="ephemeral-volume-controller"
	I0328 01:33:28.257291    6044 command_runner.go:130] ! I0328 01:32:21.926334       1 controllermanager.go:687] "Controller is disabled by a feature gate" controller="storageversion-garbage-collector-controller" requiredFeatureGates=["APIServerIdentity","StorageVersionAPI"]
	I0328 01:33:28.257291    6044 command_runner.go:130] ! I0328 01:32:21.926586       1 controller.go:169] "Starting ephemeral volume controller"
	I0328 01:33:28.257291    6044 command_runner.go:130] ! I0328 01:32:21.926965       1 shared_informer.go:311] Waiting for caches to sync for ephemeral
	I0328 01:33:28.257291    6044 command_runner.go:130] ! I0328 01:32:22.081182       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="ingresses.networking.k8s.io"
	I0328 01:33:28.257291    6044 command_runner.go:130] ! I0328 01:32:22.083797       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="networkpolicies.networking.k8s.io"
	I0328 01:33:28.257291    6044 command_runner.go:130] ! I0328 01:32:22.084146       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="roles.rbac.authorization.k8s.io"
	I0328 01:33:28.257291    6044 command_runner.go:130] ! I0328 01:32:22.084540       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="csistoragecapacities.storage.k8s.io"
	I0328 01:33:28.257291    6044 command_runner.go:130] ! W0328 01:32:22.084798       1 shared_informer.go:591] resyncPeriod 19h39m22.96948195s is smaller than resyncCheckPeriod 22h4m29.884091788s and the informer has already started. Changing it to 22h4m29.884091788s
	I0328 01:33:28.257291    6044 command_runner.go:130] ! I0328 01:32:22.085208       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="serviceaccounts"
	I0328 01:33:28.257291    6044 command_runner.go:130] ! I0328 01:32:22.085543       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="horizontalpodautoscalers.autoscaling"
	I0328 01:33:28.257291    6044 command_runner.go:130] ! I0328 01:32:22.085825       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="statefulsets.apps"
	I0328 01:33:28.257291    6044 command_runner.go:130] ! I0328 01:32:22.086183       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="controllerrevisions.apps"
	I0328 01:33:28.257291    6044 command_runner.go:130] ! I0328 01:32:22.086894       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="podtemplates"
	I0328 01:33:28.257827    6044 command_runner.go:130] ! I0328 01:32:22.087069       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="daemonsets.apps"
	I0328 01:33:28.257827    6044 command_runner.go:130] ! I0328 01:32:22.087521       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="leases.coordination.k8s.io"
	I0328 01:33:28.257869    6044 command_runner.go:130] ! I0328 01:32:22.087567       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="endpoints"
	I0328 01:33:28.257869    6044 command_runner.go:130] ! W0328 01:32:22.087624       1 shared_informer.go:591] resyncPeriod 12h6m23.941100832s is smaller than resyncCheckPeriod 22h4m29.884091788s and the informer has already started. Changing it to 22h4m29.884091788s
	I0328 01:33:28.257869    6044 command_runner.go:130] ! I0328 01:32:22.087903       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="deployments.apps"
	I0328 01:33:28.257869    6044 command_runner.go:130] ! I0328 01:32:22.088034       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="rolebindings.rbac.authorization.k8s.io"
	I0328 01:33:28.257869    6044 command_runner.go:130] ! I0328 01:32:22.088275       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="endpointslices.discovery.k8s.io"
	I0328 01:33:28.257985    6044 command_runner.go:130] ! I0328 01:32:22.088741       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="replicasets.apps"
	I0328 01:33:28.257985    6044 command_runner.go:130] ! I0328 01:32:22.089011       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="poddisruptionbudgets.policy"
	I0328 01:33:28.258047    6044 command_runner.go:130] ! I0328 01:32:22.104096       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="cronjobs.batch"
	I0328 01:33:28.258047    6044 command_runner.go:130] ! I0328 01:32:22.124297       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="jobs.batch"
	I0328 01:33:28.258047    6044 command_runner.go:130] ! I0328 01:32:22.131348       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="limitranges"
	I0328 01:33:28.258047    6044 command_runner.go:130] ! I0328 01:32:22.132084       1 controllermanager.go:735] "Started controller" controller="resourcequota-controller"
	I0328 01:33:28.258047    6044 command_runner.go:130] ! I0328 01:32:22.132998       1 resource_quota_controller.go:294] "Starting resource quota controller"
	I0328 01:33:28.258127    6044 command_runner.go:130] ! I0328 01:32:22.133345       1 shared_informer.go:311] Waiting for caches to sync for resource quota
	I0328 01:33:28.258127    6044 command_runner.go:130] ! I0328 01:32:22.134354       1 resource_quota_monitor.go:305] "QuotaMonitor running"
	I0328 01:33:28.258127    6044 command_runner.go:130] ! I0328 01:32:22.146807       1 controllermanager.go:735] "Started controller" controller="job-controller"
	I0328 01:33:28.258199    6044 command_runner.go:130] ! I0328 01:32:22.147286       1 job_controller.go:224] "Starting job controller"
	I0328 01:33:28.258199    6044 command_runner.go:130] ! I0328 01:32:22.147508       1 shared_informer.go:311] Waiting for caches to sync for job
	I0328 01:33:28.258199    6044 command_runner.go:130] ! I0328 01:32:22.165018       1 node_lifecycle_controller.go:425] "Controller will reconcile labels"
	I0328 01:33:28.258274    6044 command_runner.go:130] ! I0328 01:32:22.165501       1 controllermanager.go:735] "Started controller" controller="node-lifecycle-controller"
	I0328 01:33:28.258274    6044 command_runner.go:130] ! I0328 01:32:22.165846       1 node_lifecycle_controller.go:459] "Sending events to api server"
	I0328 01:33:28.258274    6044 command_runner.go:130] ! I0328 01:32:22.166330       1 node_lifecycle_controller.go:470] "Starting node controller"
	I0328 01:33:28.258274    6044 command_runner.go:130] ! I0328 01:32:22.167894       1 shared_informer.go:311] Waiting for caches to sync for taint
	I0328 01:33:28.258364    6044 command_runner.go:130] ! I0328 01:32:22.212429       1 controllermanager.go:735] "Started controller" controller="clusterrole-aggregation-controller"
	I0328 01:33:28.258364    6044 command_runner.go:130] ! I0328 01:32:22.212522       1 clusterroleaggregation_controller.go:189] "Starting ClusterRoleAggregator controller"
	I0328 01:33:28.258364    6044 command_runner.go:130] ! I0328 01:32:22.212533       1 shared_informer.go:311] Waiting for caches to sync for ClusterRoleAggregator
	I0328 01:33:28.258364    6044 command_runner.go:130] ! I0328 01:32:22.258526       1 controllermanager.go:735] "Started controller" controller="persistentvolume-protection-controller"
	I0328 01:33:28.258453    6044 command_runner.go:130] ! I0328 01:32:22.258865       1 pv_protection_controller.go:78] "Starting PV protection controller"
	I0328 01:33:28.258453    6044 command_runner.go:130] ! I0328 01:32:22.258907       1 shared_informer.go:311] Waiting for caches to sync for PV protection
	I0328 01:33:28.258453    6044 command_runner.go:130] ! I0328 01:32:22.324062       1 controllermanager.go:735] "Started controller" controller="ttl-after-finished-controller"
	I0328 01:33:28.258518    6044 command_runner.go:130] ! I0328 01:32:22.324128       1 ttlafterfinished_controller.go:109] "Starting TTL after finished controller"
	I0328 01:33:28.258545    6044 command_runner.go:130] ! I0328 01:32:22.324137       1 shared_informer.go:311] Waiting for caches to sync for TTL after finished
	I0328 01:33:28.258575    6044 command_runner.go:130] ! I0328 01:32:22.358296       1 controllermanager.go:735] "Started controller" controller="certificatesigningrequest-approving-controller"
	I0328 01:33:28.258575    6044 command_runner.go:130] ! I0328 01:32:22.358367       1 certificate_controller.go:115] "Starting certificate controller" name="csrapproving"
	I0328 01:33:28.258575    6044 command_runner.go:130] ! I0328 01:32:22.358377       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrapproving
	I0328 01:33:28.258575    6044 command_runner.go:130] ! I0328 01:32:32.447083       1 range_allocator.go:111] "No Secondary Service CIDR provided. Skipping filtering out secondary service addresses"
	I0328 01:33:28.258575    6044 command_runner.go:130] ! I0328 01:32:32.447529       1 node_ipam_controller.go:160] "Starting ipam controller"
	I0328 01:33:28.258575    6044 command_runner.go:130] ! I0328 01:32:32.447619       1 shared_informer.go:311] Waiting for caches to sync for node
	I0328 01:33:28.258575    6044 command_runner.go:130] ! I0328 01:32:32.447221       1 controllermanager.go:735] "Started controller" controller="node-ipam-controller"
	I0328 01:33:28.258575    6044 command_runner.go:130] ! I0328 01:32:32.451626       1 controllermanager.go:735] "Started controller" controller="persistentvolume-attach-detach-controller"
	I0328 01:33:28.258575    6044 command_runner.go:130] ! I0328 01:32:32.451960       1 controllermanager.go:687] "Controller is disabled by a feature gate" controller="resourceclaim-controller" requiredFeatureGates=["DynamicResourceAllocation"]
	I0328 01:33:28.258575    6044 command_runner.go:130] ! I0328 01:32:32.451695       1 attach_detach_controller.go:337] "Starting attach detach controller"
	I0328 01:33:28.258575    6044 command_runner.go:130] ! I0328 01:32:32.452296       1 shared_informer.go:311] Waiting for caches to sync for attach detach
	I0328 01:33:28.258575    6044 command_runner.go:130] ! I0328 01:32:32.465613       1 shared_informer.go:311] Waiting for caches to sync for resource quota
	I0328 01:33:28.258575    6044 command_runner.go:130] ! I0328 01:32:32.470233       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-240000-m02"
	I0328 01:33:28.258575    6044 command_runner.go:130] ! I0328 01:32:32.470509       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-240000-m02"
	I0328 01:33:28.258575    6044 command_runner.go:130] ! I0328 01:32:32.470641       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-240000-m02"
	I0328 01:33:28.258575    6044 command_runner.go:130] ! I0328 01:32:32.471011       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-240000\" does not exist"
	I0328 01:33:28.258575    6044 command_runner.go:130] ! I0328 01:32:32.471142       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-240000-m02\" does not exist"
	I0328 01:33:28.258575    6044 command_runner.go:130] ! I0328 01:32:32.471391       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-240000-m03\" does not exist"
	I0328 01:33:28.258575    6044 command_runner.go:130] ! I0328 01:32:32.496560       1 shared_informer.go:311] Waiting for caches to sync for garbage collector
	I0328 01:33:28.258575    6044 command_runner.go:130] ! I0328 01:32:32.507769       1 shared_informer.go:318] Caches are synced for TTL
	I0328 01:33:28.258575    6044 command_runner.go:130] ! I0328 01:32:32.513624       1 shared_informer.go:318] Caches are synced for ClusterRoleAggregator
	I0328 01:33:28.258575    6044 command_runner.go:130] ! I0328 01:32:32.518304       1 shared_informer.go:318] Caches are synced for bootstrap_signer
	I0328 01:33:28.258575    6044 command_runner.go:130] ! I0328 01:32:32.519904       1 shared_informer.go:318] Caches are synced for cronjob
	I0328 01:33:28.258575    6044 command_runner.go:130] ! I0328 01:32:32.524287       1 shared_informer.go:318] Caches are synced for TTL after finished
	I0328 01:33:28.258575    6044 command_runner.go:130] ! I0328 01:32:32.529587       1 shared_informer.go:318] Caches are synced for ReplicaSet
	I0328 01:33:28.258575    6044 command_runner.go:130] ! I0328 01:32:32.531767       1 shared_informer.go:318] Caches are synced for ephemeral
	I0328 01:33:28.258575    6044 command_runner.go:130] ! I0328 01:32:32.533493       1 shared_informer.go:318] Caches are synced for expand
	I0328 01:33:28.258575    6044 command_runner.go:130] ! I0328 01:32:32.549795       1 shared_informer.go:318] Caches are synced for job
	I0328 01:33:28.258575    6044 command_runner.go:130] ! I0328 01:32:32.550526       1 shared_informer.go:318] Caches are synced for namespace
	I0328 01:33:28.258575    6044 command_runner.go:130] ! I0328 01:32:32.550874       1 shared_informer.go:318] Caches are synced for ReplicationController
	I0328 01:33:28.258575    6044 command_runner.go:130] ! I0328 01:32:32.551065       1 shared_informer.go:318] Caches are synced for node
	I0328 01:33:28.258575    6044 command_runner.go:130] ! I0328 01:32:32.551152       1 range_allocator.go:174] "Sending events to api server"
	I0328 01:33:28.258575    6044 command_runner.go:130] ! I0328 01:32:32.551255       1 range_allocator.go:178] "Starting range CIDR allocator"
	I0328 01:33:28.258575    6044 command_runner.go:130] ! I0328 01:32:32.551308       1 shared_informer.go:311] Waiting for caches to sync for cidrallocator
	I0328 01:33:28.258575    6044 command_runner.go:130] ! I0328 01:32:32.551340       1 shared_informer.go:318] Caches are synced for cidrallocator
	I0328 01:33:28.259102    6044 command_runner.go:130] ! I0328 01:32:32.554992       1 shared_informer.go:318] Caches are synced for attach detach
	I0328 01:33:28.259102    6044 command_runner.go:130] ! I0328 01:32:32.555603       1 shared_informer.go:318] Caches are synced for service account
	I0328 01:33:28.259102    6044 command_runner.go:130] ! I0328 01:32:32.555933       1 shared_informer.go:318] Caches are synced for persistent volume
	I0328 01:33:28.259158    6044 command_runner.go:130] ! I0328 01:32:32.568824       1 shared_informer.go:318] Caches are synced for taint
	I0328 01:33:28.259158    6044 command_runner.go:130] ! I0328 01:32:32.568944       1 node_lifecycle_controller.go:1222] "Initializing eviction metric for zone" zone=""
	I0328 01:33:28.259158    6044 command_runner.go:130] ! I0328 01:32:32.568985       1 shared_informer.go:318] Caches are synced for taint-eviction-controller
	I0328 01:33:28.259158    6044 command_runner.go:130] ! I0328 01:32:32.569031       1 shared_informer.go:318] Caches are synced for deployment
	I0328 01:33:28.259231    6044 command_runner.go:130] ! I0328 01:32:32.573248       1 event.go:376] "Event occurred" object="multinode-240000" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-240000 event: Registered Node multinode-240000 in Controller"
	I0328 01:33:28.259258    6044 command_runner.go:130] ! I0328 01:32:32.573552       1 event.go:376] "Event occurred" object="multinode-240000-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-240000-m02 event: Registered Node multinode-240000-m02 in Controller"
	I0328 01:33:28.259258    6044 command_runner.go:130] ! I0328 01:32:32.573778       1 event.go:376] "Event occurred" object="multinode-240000-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-240000-m03 event: Registered Node multinode-240000-m03 in Controller"
	I0328 01:33:28.259258    6044 command_runner.go:130] ! I0328 01:32:32.573567       1 shared_informer.go:318] Caches are synced for PV protection
	I0328 01:33:28.259339    6044 command_runner.go:130] ! I0328 01:32:32.573253       1 shared_informer.go:318] Caches are synced for PVC protection
	I0328 01:33:28.259339    6044 command_runner.go:130] ! I0328 01:32:32.575355       1 shared_informer.go:318] Caches are synced for HPA
	I0328 01:33:28.259339    6044 command_runner.go:130] ! I0328 01:32:32.588982       1 shared_informer.go:318] Caches are synced for crt configmap
	I0328 01:33:28.259339    6044 command_runner.go:130] ! I0328 01:32:32.602942       1 shared_informer.go:318] Caches are synced for daemon sets
	I0328 01:33:28.259402    6044 command_runner.go:130] ! I0328 01:32:32.605960       1 shared_informer.go:318] Caches are synced for disruption
	I0328 01:33:28.259426    6044 command_runner.go:130] ! I0328 01:32:32.607311       1 node_lifecycle_controller.go:874] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-240000"
	I0328 01:33:28.259456    6044 command_runner.go:130] ! I0328 01:32:32.607638       1 node_lifecycle_controller.go:874] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-240000-m02"
	I0328 01:33:28.259456    6044 command_runner.go:130] ! I0328 01:32:32.608098       1 node_lifecycle_controller.go:874] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-240000-m03"
	I0328 01:33:28.259456    6044 command_runner.go:130] ! I0328 01:32:32.608944       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="76.132556ms"
	I0328 01:33:28.259456    6044 command_runner.go:130] ! I0328 01:32:32.609570       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="79.623412ms"
	I0328 01:33:28.259456    6044 command_runner.go:130] ! I0328 01:32:32.610117       1 node_lifecycle_controller.go:1068] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I0328 01:33:28.259456    6044 command_runner.go:130] ! I0328 01:32:32.611937       1 shared_informer.go:318] Caches are synced for stateful set
	I0328 01:33:28.259456    6044 command_runner.go:130] ! I0328 01:32:32.612346       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="59.398µs"
	I0328 01:33:28.259456    6044 command_runner.go:130] ! I0328 01:32:32.612652       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="32.799µs"
	I0328 01:33:28.259456    6044 command_runner.go:130] ! I0328 01:32:32.618783       1 shared_informer.go:318] Caches are synced for legacy-service-account-token-cleaner
	I0328 01:33:28.259456    6044 command_runner.go:130] ! I0328 01:32:32.623971       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-serving
	I0328 01:33:28.259456    6044 command_runner.go:130] ! I0328 01:32:32.624286       1 shared_informer.go:318] Caches are synced for GC
	I0328 01:33:28.259456    6044 command_runner.go:130] ! I0328 01:32:32.626634       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0328 01:33:28.259456    6044 command_runner.go:130] ! I0328 01:32:32.626831       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-client
	I0328 01:33:28.259456    6044 command_runner.go:130] ! I0328 01:32:32.651676       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-legacy-unknown
	I0328 01:33:28.259456    6044 command_runner.go:130] ! I0328 01:32:32.659290       1 shared_informer.go:318] Caches are synced for certificate-csrapproving
	I0328 01:33:28.259456    6044 command_runner.go:130] ! I0328 01:32:32.667521       1 shared_informer.go:318] Caches are synced for resource quota
	I0328 01:33:28.259456    6044 command_runner.go:130] ! I0328 01:32:32.683826       1 shared_informer.go:318] Caches are synced for endpoint_slice_mirroring
	I0328 01:33:28.259456    6044 command_runner.go:130] ! I0328 01:32:32.683944       1 shared_informer.go:318] Caches are synced for endpoint
	I0328 01:33:28.259456    6044 command_runner.go:130] ! I0328 01:32:32.737259       1 shared_informer.go:318] Caches are synced for resource quota
	I0328 01:33:28.259456    6044 command_runner.go:130] ! I0328 01:32:32.742870       1 shared_informer.go:318] Caches are synced for endpoint_slice
	I0328 01:33:28.259456    6044 command_runner.go:130] ! I0328 01:32:33.088175       1 shared_informer.go:318] Caches are synced for garbage collector
	I0328 01:33:28.259456    6044 command_runner.go:130] ! I0328 01:32:33.088209       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I0328 01:33:28.259456    6044 command_runner.go:130] ! I0328 01:32:33.097231       1 shared_informer.go:318] Caches are synced for garbage collector
	I0328 01:33:28.259456    6044 command_runner.go:130] ! I0328 01:32:53.970448       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-240000-m02"
	I0328 01:33:28.259456    6044 command_runner.go:130] ! I0328 01:32:57.647643       1 event.go:376] "Event occurred" object="kube-system/storage-provisioner" fieldPath="" kind="Pod" apiVersion="v1" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod kube-system/storage-provisioner"
	I0328 01:33:28.259456    6044 command_runner.go:130] ! I0328 01:32:57.647943       1 event.go:376] "Event occurred" object="default/busybox-7fdf7869d9-ct428" fieldPath="" kind="Pod" apiVersion="v1" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-7fdf7869d9-ct428"
	I0328 01:33:28.259456    6044 command_runner.go:130] ! I0328 01:32:57.648069       1 event.go:376] "Event occurred" object="kube-system/coredns-76f75df574-776ph" fieldPath="" kind="Pod" apiVersion="v1" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod kube-system/coredns-76f75df574-776ph"
	I0328 01:33:28.259456    6044 command_runner.go:130] ! I0328 01:33:12.667954       1 event.go:376] "Event occurred" object="multinode-240000-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node multinode-240000-m02 status is now: NodeNotReady"
	I0328 01:33:28.259456    6044 command_runner.go:130] ! I0328 01:33:12.686681       1 event.go:376] "Event occurred" object="default/busybox-7fdf7869d9-zgwm4" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0328 01:33:28.259456    6044 command_runner.go:130] ! I0328 01:33:12.698519       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="13.246789ms"
	I0328 01:33:28.259456    6044 command_runner.go:130] ! I0328 01:33:12.699114       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="37.9µs"
	I0328 01:33:28.259456    6044 command_runner.go:130] ! I0328 01:33:12.709080       1 event.go:376] "Event occurred" object="kube-system/kindnet-hsnfl" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0328 01:33:28.259456    6044 command_runner.go:130] ! I0328 01:33:12.733251       1 event.go:376] "Event occurred" object="kube-system/kube-proxy-t88gz" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0328 01:33:28.259981    6044 command_runner.go:130] ! I0328 01:33:25.571898       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="20.940169ms"
	I0328 01:33:28.259981    6044 command_runner.go:130] ! I0328 01:33:25.572013       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="31.4µs"
	I0328 01:33:28.260028    6044 command_runner.go:130] ! I0328 01:33:25.596419       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="70.5µs"
	I0328 01:33:28.260028    6044 command_runner.go:130] ! I0328 01:33:25.652921       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="18.37866ms"
	I0328 01:33:28.260083    6044 command_runner.go:130] ! I0328 01:33:25.653855       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="42.9µs"
	I0328 01:33:28.277311    6044 logs.go:123] Gathering logs for kubelet ...
	I0328 01:33:28.277311    6044 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:33:28.311663    6044 command_runner.go:130] > Mar 28 01:32:09 multinode-240000 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0328 01:33:28.311755    6044 command_runner.go:130] > Mar 28 01:32:10 multinode-240000 kubelet[1398]: I0328 01:32:10.127138    1398 server.go:487] "Kubelet version" kubeletVersion="v1.29.3"
	I0328 01:33:28.311755    6044 command_runner.go:130] > Mar 28 01:32:10 multinode-240000 kubelet[1398]: I0328 01:32:10.127495    1398 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0328 01:33:28.311755    6044 command_runner.go:130] > Mar 28 01:32:10 multinode-240000 kubelet[1398]: I0328 01:32:10.127845    1398 server.go:919] "Client rotation is on, will bootstrap in background"
	I0328 01:33:28.311755    6044 command_runner.go:130] > Mar 28 01:32:10 multinode-240000 kubelet[1398]: E0328 01:32:10.128279    1398 run.go:74] "command failed" err="failed to run Kubelet: unable to load bootstrap kubeconfig: stat /etc/kubernetes/bootstrap-kubelet.conf: no such file or directory"
	I0328 01:33:28.311755    6044 command_runner.go:130] > Mar 28 01:32:10 multinode-240000 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	I0328 01:33:28.311856    6044 command_runner.go:130] > Mar 28 01:32:10 multinode-240000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	I0328 01:33:28.311884    6044 command_runner.go:130] > Mar 28 01:32:10 multinode-240000 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
	I0328 01:33:28.311884    6044 command_runner.go:130] > Mar 28 01:32:10 multinode-240000 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	I0328 01:33:28.311884    6044 command_runner.go:130] > Mar 28 01:32:10 multinode-240000 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0328 01:33:28.311884    6044 command_runner.go:130] > Mar 28 01:32:10 multinode-240000 kubelet[1450]: I0328 01:32:10.911342    1450 server.go:487] "Kubelet version" kubeletVersion="v1.29.3"
	I0328 01:33:28.311956    6044 command_runner.go:130] > Mar 28 01:32:10 multinode-240000 kubelet[1450]: I0328 01:32:10.911442    1450 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0328 01:33:28.311980    6044 command_runner.go:130] > Mar 28 01:32:10 multinode-240000 kubelet[1450]: I0328 01:32:10.911822    1450 server.go:919] "Client rotation is on, will bootstrap in background"
	I0328 01:33:28.311980    6044 command_runner.go:130] > Mar 28 01:32:10 multinode-240000 kubelet[1450]: E0328 01:32:10.911883    1450 run.go:74] "command failed" err="failed to run Kubelet: unable to load bootstrap kubeconfig: stat /etc/kubernetes/bootstrap-kubelet.conf: no such file or directory"
	I0328 01:33:28.311980    6044 command_runner.go:130] > Mar 28 01:32:10 multinode-240000 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	I0328 01:33:28.312052    6044 command_runner.go:130] > Mar 28 01:32:10 multinode-240000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	I0328 01:33:28.312078    6044 command_runner.go:130] > Mar 28 01:32:11 multinode-240000 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	I0328 01:33:28.312078    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0328 01:33:28.312078    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.568166    1533 server.go:487] "Kubelet version" kubeletVersion="v1.29.3"
	I0328 01:33:28.312078    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.568590    1533 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0328 01:33:28.312144    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.568985    1533 server.go:919] "Client rotation is on, will bootstrap in background"
	I0328 01:33:28.312170    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.572343    1533 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	I0328 01:33:28.312221    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.590932    1533 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0328 01:33:28.312257    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.648763    1533 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /"
	I0328 01:33:28.312294    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.650098    1533 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
	I0328 01:33:28.312389    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.650393    1533 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
	I0328 01:33:28.312412    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.650479    1533 topology_manager.go:138] "Creating topology manager with none policy"
	I0328 01:33:28.312412    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.650495    1533 container_manager_linux.go:301] "Creating device plugin manager"
	I0328 01:33:28.312467    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.652420    1533 state_mem.go:36] "Initialized new in-memory state store"
	I0328 01:33:28.312493    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.654064    1533 kubelet.go:396] "Attempting to sync node with API server"
	I0328 01:33:28.312493    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.654388    1533 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
	I0328 01:33:28.312493    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.654468    1533 kubelet.go:312] "Adding apiserver pod source"
	I0328 01:33:28.312493    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.655057    1533 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
	I0328 01:33:28.312493    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: W0328 01:32:13.659987    1533 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dmultinode-240000&limit=500&resourceVersion=0": dial tcp 172.28.229.19:8443: connect: connection refused
	I0328 01:33:28.312493    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: E0328 01:32:13.660087    1533 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dmultinode-240000&limit=500&resourceVersion=0": dial tcp 172.28.229.19:8443: connect: connection refused
	I0328 01:33:28.312493    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: W0328 01:32:13.669074    1533 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.28.229.19:8443: connect: connection refused
	I0328 01:33:28.312493    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: E0328 01:32:13.669300    1533 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.28.229.19:8443: connect: connection refused
	I0328 01:33:28.312493    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.674896    1533 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="docker" version="26.0.0" apiVersion="v1"
	I0328 01:33:28.312493    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.676909    1533 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
	I0328 01:33:28.312493    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: W0328 01:32:13.677427    1533 probe.go:268] Flexvolume plugin directory at /usr/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
	I0328 01:33:28.312493    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.678180    1533 server.go:1256] "Started kubelet"
	I0328 01:33:28.312493    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.680600    1533 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
	I0328 01:33:28.312493    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.682066    1533 server.go:461] "Adding debug handlers to kubelet server"
	I0328 01:33:28.312493    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.683585    1533 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
	I0328 01:33:28.312493    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.684672    1533 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
	I0328 01:33:28.312493    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: E0328 01:32:13.686372    1533 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events\": dial tcp 172.28.229.19:8443: connect: connection refused" event="&Event{ObjectMeta:{multinode-240000.17c0c99ccc29b81f  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:multinode-240000,UID:multinode-240000,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:multinode-240000,},FirstTimestamp:2024-03-28 01:32:13.678155807 +0000 UTC m=+0.237165597,LastTimestamp:2024-03-28 01:32:13.678155807 +0000 UTC m=+0.237165597,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:multinode-240000,}"
	I0328 01:33:28.312493    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.690229    1533 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
	I0328 01:33:28.312493    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.708889    1533 volume_manager.go:291] "Starting Kubelet Volume Manager"
	I0328 01:33:28.312493    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.712930    1533 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
	I0328 01:33:28.312493    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: E0328 01:32:13.730166    1533 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-240000?timeout=10s\": dial tcp 172.28.229.19:8443: connect: connection refused" interval="200ms"
	I0328 01:33:28.313058    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: W0328 01:32:13.730938    1533 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.28.229.19:8443: connect: connection refused
	I0328 01:33:28.313104    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: E0328 01:32:13.731114    1533 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.28.229.19:8443: connect: connection refused
	I0328 01:33:28.313104    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.739149    1533 reconciler_new.go:29] "Reconciler: start to sync state"
	I0328 01:33:28.313162    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.749138    1533 factory.go:221] Registration of the systemd container factory successfully
	I0328 01:33:28.313162    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.749449    1533 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
	I0328 01:33:28.313239    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.750189    1533 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
	I0328 01:33:28.313239    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.776861    1533 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
	I0328 01:33:28.313239    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.786285    1533 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
	I0328 01:33:28.313239    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.788142    1533 status_manager.go:217] "Starting to sync pod status with apiserver"
	I0328 01:33:28.313302    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.788369    1533 kubelet.go:2329] "Starting kubelet main sync loop"
	I0328 01:33:28.313355    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: E0328 01:32:13.788778    1533 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
	I0328 01:33:28.313355    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: W0328 01:32:13.796114    1533 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.28.229.19:8443: connect: connection refused
	I0328 01:33:28.313355    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: E0328 01:32:13.796211    1533 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.28.229.19:8443: connect: connection refused
	I0328 01:33:28.313355    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.819127    1533 cpu_manager.go:214] "Starting CPU manager" policy="none"
	I0328 01:33:28.313355    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.819290    1533 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
	I0328 01:33:28.313355    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.819423    1533 state_mem.go:36] "Initialized new in-memory state store"
	I0328 01:33:28.313355    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: E0328 01:32:13.820373    1533 iptables.go:575] "Could not set up iptables canary" err=<
	I0328 01:33:28.313355    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	I0328 01:33:28.313355    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	I0328 01:33:28.313355    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]:         Perhaps ip6tables or your kernel needs to be upgraded.
	I0328 01:33:28.313355    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	I0328 01:33:28.313355    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.823600    1533 state_mem.go:88] "Updated default CPUSet" cpuSet=""
	I0328 01:33:28.313355    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.823686    1533 state_mem.go:96] "Updated CPUSet assignments" assignments={}
	I0328 01:33:28.313355    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.823700    1533 policy_none.go:49] "None policy: Start"
	I0328 01:33:28.313355    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.830073    1533 kubelet_node_status.go:73] "Attempting to register node" node="multinode-240000"
	I0328 01:33:28.313355    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: E0328 01:32:13.831657    1533 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.28.229.19:8443: connect: connection refused" node="multinode-240000"
	I0328 01:33:28.313355    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.843841    1533 memory_manager.go:170] "Starting memorymanager" policy="None"
	I0328 01:33:28.313355    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.843966    1533 state_mem.go:35] "Initializing new in-memory state store"
	I0328 01:33:28.313355    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.844749    1533 state_mem.go:75] "Updated machine memory state"
	I0328 01:33:28.313355    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.847245    1533 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
	I0328 01:33:28.313355    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.848649    1533 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
	I0328 01:33:28.313355    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.890150    1533 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="930fbfde452c0b2b3f13a6751fc648a70e87137f38175cb6dd161b40193b9a79"
	I0328 01:33:28.313355    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.890206    1533 topology_manager.go:215] "Topology Admit Handler" podUID="ada1864a97137760b3789cc738948aa2" podNamespace="kube-system" podName="kube-apiserver-multinode-240000"
	I0328 01:33:28.313355    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.908127    1533 topology_manager.go:215] "Topology Admit Handler" podUID="092744cdc60a216294790b52c372bdaa" podNamespace="kube-system" podName="kube-controller-manager-multinode-240000"
	I0328 01:33:28.313957    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: E0328 01:32:13.916258    1533 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"multinode-240000\" not found"
	I0328 01:33:28.313957    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.922354    1533 topology_manager.go:215] "Topology Admit Handler" podUID="f5f9b00a2a0d8b16290abf555def0fb3" podNamespace="kube-system" podName="kube-scheduler-multinode-240000"
	I0328 01:33:28.313957    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: E0328 01:32:13.932448    1533 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-240000?timeout=10s\": dial tcp 172.28.229.19:8443: connect: connection refused" interval="400ms"
	I0328 01:33:28.313957    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.941331    1533 topology_manager.go:215] "Topology Admit Handler" podUID="9f48c65a58defdbb87996760bf93b230" podNamespace="kube-system" podName="etcd-multinode-240000"
	I0328 01:33:28.313957    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.953609    1533 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6b6f67390b0701700963eec28e4c4cc4aa0e852e4ec0f2392f0f6f5d9bdad52a"
	I0328 01:33:28.313957    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.953654    1533 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="763932cfdf0b0ce7a2df0bd78fe540ad8e5811cd74af29eee46932fb651a4df3"
	I0328 01:33:28.313957    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.953669    1533 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6ae82cd0a848978d4fcc6941c33dd7fd18404e11e40d6b5d9f46484a6af7ec7d"
	I0328 01:33:28.313957    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.966780    1533 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/092744cdc60a216294790b52c372bdaa-usr-share-ca-certificates\") pod \"kube-controller-manager-multinode-240000\" (UID: \"092744cdc60a216294790b52c372bdaa\") " pod="kube-system/kube-controller-manager-multinode-240000"
	I0328 01:33:28.313957    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.966955    1533 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ada1864a97137760b3789cc738948aa2-ca-certs\") pod \"kube-apiserver-multinode-240000\" (UID: \"ada1864a97137760b3789cc738948aa2\") " pod="kube-system/kube-apiserver-multinode-240000"
	I0328 01:33:28.313957    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.967022    1533 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ada1864a97137760b3789cc738948aa2-k8s-certs\") pod \"kube-apiserver-multinode-240000\" (UID: \"ada1864a97137760b3789cc738948aa2\") " pod="kube-system/kube-apiserver-multinode-240000"
	I0328 01:33:28.313957    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.967064    1533 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ada1864a97137760b3789cc738948aa2-usr-share-ca-certificates\") pod \"kube-apiserver-multinode-240000\" (UID: \"ada1864a97137760b3789cc738948aa2\") " pod="kube-system/kube-apiserver-multinode-240000"
	I0328 01:33:28.313957    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.967128    1533 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/092744cdc60a216294790b52c372bdaa-ca-certs\") pod \"kube-controller-manager-multinode-240000\" (UID: \"092744cdc60a216294790b52c372bdaa\") " pod="kube-system/kube-controller-manager-multinode-240000"
	I0328 01:33:28.313957    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.967158    1533 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/092744cdc60a216294790b52c372bdaa-flexvolume-dir\") pod \"kube-controller-manager-multinode-240000\" (UID: \"092744cdc60a216294790b52c372bdaa\") " pod="kube-system/kube-controller-manager-multinode-240000"
	I0328 01:33:28.313957    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.967238    1533 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/092744cdc60a216294790b52c372bdaa-k8s-certs\") pod \"kube-controller-manager-multinode-240000\" (UID: \"092744cdc60a216294790b52c372bdaa\") " pod="kube-system/kube-controller-manager-multinode-240000"
	I0328 01:33:28.314483    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.967310    1533 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/092744cdc60a216294790b52c372bdaa-kubeconfig\") pod \"kube-controller-manager-multinode-240000\" (UID: \"092744cdc60a216294790b52c372bdaa\") " pod="kube-system/kube-controller-manager-multinode-240000"
	I0328 01:33:28.314483    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.969606    1533 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="28426f4e9df5e7247fb25f1d5d48b9917e6d95d1f58292026ed0fde424835379"
	I0328 01:33:28.314483    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.985622    1533 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5d9ed3a20e88558fec102c7c331c667347b65f4c3d7d91740e135d71d8c45e6d"
	I0328 01:33:28.314483    6044 command_runner.go:130] > Mar 28 01:32:14 multinode-240000 kubelet[1533]: I0328 01:32:14.000616    1533 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7415d077c6f8104e5bc256b9c398a1cd3b34b68ae6ab02765cf3a8a5090c4b88"
	I0328 01:33:28.314483    6044 command_runner.go:130] > Mar 28 01:32:14 multinode-240000 kubelet[1533]: I0328 01:32:14.015792    1533 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ec77663c174f9dcbe665439298f2fb709a33fb88f7ac97c33834b5a202fe4540"
	I0328 01:33:28.314629    6044 command_runner.go:130] > Mar 28 01:32:14 multinode-240000 kubelet[1533]: I0328 01:32:14.042348    1533 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="20ff2ecb3a6dbfc2d1215de07989433af9d7d836214ecb1ab63afc9e48ef03ce"
	I0328 01:33:28.314629    6044 command_runner.go:130] > Mar 28 01:32:14 multinode-240000 kubelet[1533]: I0328 01:32:14.048339    1533 kubelet_node_status.go:73] "Attempting to register node" node="multinode-240000"
	I0328 01:33:28.314693    6044 command_runner.go:130] > Mar 28 01:32:14 multinode-240000 kubelet[1533]: E0328 01:32:14.049760    1533 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.28.229.19:8443: connect: connection refused" node="multinode-240000"
	I0328 01:33:28.314757    6044 command_runner.go:130] > Mar 28 01:32:14 multinode-240000 kubelet[1533]: I0328 01:32:14.068959    1533 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f5f9b00a2a0d8b16290abf555def0fb3-kubeconfig\") pod \"kube-scheduler-multinode-240000\" (UID: \"f5f9b00a2a0d8b16290abf555def0fb3\") " pod="kube-system/kube-scheduler-multinode-240000"
	I0328 01:33:28.314757    6044 command_runner.go:130] > Mar 28 01:32:14 multinode-240000 kubelet[1533]: I0328 01:32:14.069009    1533 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/9f48c65a58defdbb87996760bf93b230-etcd-certs\") pod \"etcd-multinode-240000\" (UID: \"9f48c65a58defdbb87996760bf93b230\") " pod="kube-system/etcd-multinode-240000"
	I0328 01:33:28.314757    6044 command_runner.go:130] > Mar 28 01:32:14 multinode-240000 kubelet[1533]: I0328 01:32:14.069204    1533 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/9f48c65a58defdbb87996760bf93b230-etcd-data\") pod \"etcd-multinode-240000\" (UID: \"9f48c65a58defdbb87996760bf93b230\") " pod="kube-system/etcd-multinode-240000"
	I0328 01:33:28.314757    6044 command_runner.go:130] > Mar 28 01:32:14 multinode-240000 kubelet[1533]: E0328 01:32:14.335282    1533 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-240000?timeout=10s\": dial tcp 172.28.229.19:8443: connect: connection refused" interval="800ms"
	I0328 01:33:28.314757    6044 command_runner.go:130] > Mar 28 01:32:14 multinode-240000 kubelet[1533]: I0328 01:32:14.463052    1533 kubelet_node_status.go:73] "Attempting to register node" node="multinode-240000"
	I0328 01:33:28.314757    6044 command_runner.go:130] > Mar 28 01:32:14 multinode-240000 kubelet[1533]: E0328 01:32:14.464639    1533 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.28.229.19:8443: connect: connection refused" node="multinode-240000"
	I0328 01:33:28.314757    6044 command_runner.go:130] > Mar 28 01:32:14 multinode-240000 kubelet[1533]: W0328 01:32:14.765820    1533 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.28.229.19:8443: connect: connection refused
	I0328 01:33:28.314757    6044 command_runner.go:130] > Mar 28 01:32:14 multinode-240000 kubelet[1533]: E0328 01:32:14.765926    1533 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.28.229.19:8443: connect: connection refused
	I0328 01:33:28.314757    6044 command_runner.go:130] > Mar 28 01:32:14 multinode-240000 kubelet[1533]: W0328 01:32:14.983409    1533 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.28.229.19:8443: connect: connection refused
	I0328 01:33:28.314757    6044 command_runner.go:130] > Mar 28 01:32:14 multinode-240000 kubelet[1533]: E0328 01:32:14.983490    1533 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.28.229.19:8443: connect: connection refused
	I0328 01:33:28.314757    6044 command_runner.go:130] > Mar 28 01:32:15 multinode-240000 kubelet[1533]: I0328 01:32:15.093921    1533 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4dd7c4652074475872599900ce854e48425a373dfa665073bd9bfb56fa5330c0"
	I0328 01:33:28.314757    6044 command_runner.go:130] > Mar 28 01:32:15 multinode-240000 kubelet[1533]: I0328 01:32:15.109197    1533 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8780a18ab975521e6b1b20e4b7cffe786927f03654dd858b9d179f1d73d13d81"
	I0328 01:33:28.314757    6044 command_runner.go:130] > Mar 28 01:32:15 multinode-240000 kubelet[1533]: E0328 01:32:15.138489    1533 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-240000?timeout=10s\": dial tcp 172.28.229.19:8443: connect: connection refused" interval="1.6s"
	I0328 01:33:28.314757    6044 command_runner.go:130] > Mar 28 01:32:15 multinode-240000 kubelet[1533]: W0328 01:32:15.162611    1533 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dmultinode-240000&limit=500&resourceVersion=0": dial tcp 172.28.229.19:8443: connect: connection refused
	I0328 01:33:28.314757    6044 command_runner.go:130] > Mar 28 01:32:15 multinode-240000 kubelet[1533]: E0328 01:32:15.162839    1533 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dmultinode-240000&limit=500&resourceVersion=0": dial tcp 172.28.229.19:8443: connect: connection refused
	I0328 01:33:28.314757    6044 command_runner.go:130] > Mar 28 01:32:15 multinode-240000 kubelet[1533]: W0328 01:32:15.243486    1533 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.28.229.19:8443: connect: connection refused
	I0328 01:33:28.314757    6044 command_runner.go:130] > Mar 28 01:32:15 multinode-240000 kubelet[1533]: E0328 01:32:15.243618    1533 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.28.229.19:8443: connect: connection refused
	I0328 01:33:28.314757    6044 command_runner.go:130] > Mar 28 01:32:15 multinode-240000 kubelet[1533]: I0328 01:32:15.300156    1533 kubelet_node_status.go:73] "Attempting to register node" node="multinode-240000"
	I0328 01:33:28.314757    6044 command_runner.go:130] > Mar 28 01:32:15 multinode-240000 kubelet[1533]: E0328 01:32:15.300985    1533 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.28.229.19:8443: connect: connection refused" node="multinode-240000"
	I0328 01:33:28.314757    6044 command_runner.go:130] > Mar 28 01:32:16 multinode-240000 kubelet[1533]: I0328 01:32:16.919859    1533 kubelet_node_status.go:73] "Attempting to register node" node="multinode-240000"
	I0328 01:33:28.315354    6044 command_runner.go:130] > Mar 28 01:32:19 multinode-240000 kubelet[1533]: I0328 01:32:19.585350    1533 kubelet_node_status.go:112] "Node was previously registered" node="multinode-240000"
	I0328 01:33:28.315354    6044 command_runner.go:130] > Mar 28 01:32:19 multinode-240000 kubelet[1533]: I0328 01:32:19.586142    1533 kubelet_node_status.go:76] "Successfully registered node" node="multinode-240000"
	I0328 01:33:28.315354    6044 command_runner.go:130] > Mar 28 01:32:19 multinode-240000 kubelet[1533]: I0328 01:32:19.588202    1533 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	I0328 01:33:28.315354    6044 command_runner.go:130] > Mar 28 01:32:19 multinode-240000 kubelet[1533]: I0328 01:32:19.589607    1533 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	I0328 01:33:28.315354    6044 command_runner.go:130] > Mar 28 01:32:19 multinode-240000 kubelet[1533]: I0328 01:32:19.606942    1533 setters.go:568] "Node became not ready" node="multinode-240000" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-03-28T01:32:19Z","lastTransitionTime":"2024-03-28T01:32:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"}
	I0328 01:33:28.315354    6044 command_runner.go:130] > Mar 28 01:32:19 multinode-240000 kubelet[1533]: I0328 01:32:19.664958    1533 apiserver.go:52] "Watching apiserver"
	I0328 01:33:28.315354    6044 command_runner.go:130] > Mar 28 01:32:19 multinode-240000 kubelet[1533]: I0328 01:32:19.670955    1533 topology_manager.go:215] "Topology Admit Handler" podUID="dc1416cc-736d-4eab-b95d-e963572b78e3" podNamespace="kube-system" podName="coredns-76f75df574-776ph"
	I0328 01:33:28.315593    6044 command_runner.go:130] > Mar 28 01:32:19 multinode-240000 kubelet[1533]: E0328 01:32:19.671192    1533 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-76f75df574-776ph" podUID="dc1416cc-736d-4eab-b95d-e963572b78e3"
	I0328 01:33:28.315593    6044 command_runner.go:130] > Mar 28 01:32:19 multinode-240000 kubelet[1533]: I0328 01:32:19.671207    1533 kubelet.go:1903] "Trying to delete pod" pod="kube-system/etcd-multinode-240000" podUID="8c9e76e4-ed9f-4595-aa5e-ddd6e74f4e93"
	I0328 01:33:28.315593    6044 command_runner.go:130] > Mar 28 01:32:19 multinode-240000 kubelet[1533]: I0328 01:32:19.672582    1533 topology_manager.go:215] "Topology Admit Handler" podUID="7c75e225-0e90-4916-bf27-a00a036e0955" podNamespace="kube-system" podName="kindnet-rwghf"
	I0328 01:33:28.315692    6044 command_runner.go:130] > Mar 28 01:32:19 multinode-240000 kubelet[1533]: I0328 01:32:19.672700    1533 topology_manager.go:215] "Topology Admit Handler" podUID="22fd5683-834d-47ae-a5b4-1ed980514e1b" podNamespace="kube-system" podName="kube-proxy-47rqg"
	I0328 01:33:28.315692    6044 command_runner.go:130] > Mar 28 01:32:19 multinode-240000 kubelet[1533]: I0328 01:32:19.672921    1533 topology_manager.go:215] "Topology Admit Handler" podUID="3f881c2f-5b9a-4dc5-bc8c-56784b9ff60f" podNamespace="kube-system" podName="storage-provisioner"
	I0328 01:33:28.315752    6044 command_runner.go:130] > Mar 28 01:32:19 multinode-240000 kubelet[1533]: I0328 01:32:19.672997    1533 topology_manager.go:215] "Topology Admit Handler" podUID="82be2bd2-ca76-4804-8e23-ebd40a434863" podNamespace="default" podName="busybox-7fdf7869d9-ct428"
	I0328 01:33:28.315800    6044 command_runner.go:130] > Mar 28 01:32:19 multinode-240000 kubelet[1533]: E0328 01:32:19.673204    1533 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7fdf7869d9-ct428" podUID="82be2bd2-ca76-4804-8e23-ebd40a434863"
	I0328 01:33:28.315800    6044 command_runner.go:130] > Mar 28 01:32:19 multinode-240000 kubelet[1533]: I0328 01:32:19.674661    1533 kubelet.go:1903] "Trying to delete pod" pod="kube-system/kube-apiserver-multinode-240000" podUID="7736298d-3898-4693-84bf-2311305bf52c"
	I0328 01:33:28.315800    6044 command_runner.go:130] > Mar 28 01:32:19 multinode-240000 kubelet[1533]: I0328 01:32:19.710220    1533 kubelet.go:1908] "Deleted mirror pod because it is outdated" pod="kube-system/etcd-multinode-240000"
	I0328 01:33:28.315800    6044 command_runner.go:130] > Mar 28 01:32:19 multinode-240000 kubelet[1533]: I0328 01:32:19.714418    1533 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
	I0328 01:33:28.315800    6044 command_runner.go:130] > Mar 28 01:32:19 multinode-240000 kubelet[1533]: I0328 01:32:19.725067    1533 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7c75e225-0e90-4916-bf27-a00a036e0955-xtables-lock\") pod \"kindnet-rwghf\" (UID: \"7c75e225-0e90-4916-bf27-a00a036e0955\") " pod="kube-system/kindnet-rwghf"
	I0328 01:33:28.315800    6044 command_runner.go:130] > Mar 28 01:32:19 multinode-240000 kubelet[1533]: I0328 01:32:19.725144    1533 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/3f881c2f-5b9a-4dc5-bc8c-56784b9ff60f-tmp\") pod \"storage-provisioner\" (UID: \"3f881c2f-5b9a-4dc5-bc8c-56784b9ff60f\") " pod="kube-system/storage-provisioner"
	I0328 01:33:28.315800    6044 command_runner.go:130] > Mar 28 01:32:19 multinode-240000 kubelet[1533]: I0328 01:32:19.725200    1533 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/22fd5683-834d-47ae-a5b4-1ed980514e1b-xtables-lock\") pod \"kube-proxy-47rqg\" (UID: \"22fd5683-834d-47ae-a5b4-1ed980514e1b\") " pod="kube-system/kube-proxy-47rqg"
	I0328 01:33:28.315800    6044 command_runner.go:130] > Mar 28 01:32:19 multinode-240000 kubelet[1533]: I0328 01:32:19.725237    1533 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/7c75e225-0e90-4916-bf27-a00a036e0955-cni-cfg\") pod \"kindnet-rwghf\" (UID: \"7c75e225-0e90-4916-bf27-a00a036e0955\") " pod="kube-system/kindnet-rwghf"
	I0328 01:33:28.315800    6044 command_runner.go:130] > Mar 28 01:32:19 multinode-240000 kubelet[1533]: I0328 01:32:19.725266    1533 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7c75e225-0e90-4916-bf27-a00a036e0955-lib-modules\") pod \"kindnet-rwghf\" (UID: \"7c75e225-0e90-4916-bf27-a00a036e0955\") " pod="kube-system/kindnet-rwghf"
	I0328 01:33:28.315800    6044 command_runner.go:130] > Mar 28 01:32:19 multinode-240000 kubelet[1533]: I0328 01:32:19.725305    1533 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/22fd5683-834d-47ae-a5b4-1ed980514e1b-lib-modules\") pod \"kube-proxy-47rqg\" (UID: \"22fd5683-834d-47ae-a5b4-1ed980514e1b\") " pod="kube-system/kube-proxy-47rqg"
	I0328 01:33:28.315800    6044 command_runner.go:130] > Mar 28 01:32:19 multinode-240000 kubelet[1533]: E0328 01:32:19.725432    1533 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0328 01:33:28.315800    6044 command_runner.go:130] > Mar 28 01:32:19 multinode-240000 kubelet[1533]: E0328 01:32:19.725551    1533 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/dc1416cc-736d-4eab-b95d-e963572b78e3-config-volume podName:dc1416cc-736d-4eab-b95d-e963572b78e3 nodeName:}" failed. No retries permitted until 2024-03-28 01:32:20.225500685 +0000 UTC m=+6.784510375 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/dc1416cc-736d-4eab-b95d-e963572b78e3-config-volume") pod "coredns-76f75df574-776ph" (UID: "dc1416cc-736d-4eab-b95d-e963572b78e3") : object "kube-system"/"coredns" not registered
	I0328 01:33:28.315800    6044 command_runner.go:130] > Mar 28 01:32:19 multinode-240000 kubelet[1533]: I0328 01:32:19.727738    1533 kubelet.go:1908] "Deleted mirror pod because it is outdated" pod="kube-system/kube-apiserver-multinode-240000"
	I0328 01:33:28.316369    6044 command_runner.go:130] > Mar 28 01:32:19 multinode-240000 kubelet[1533]: I0328 01:32:19.734766    1533 status_manager.go:877] "Failed to update status for pod" pod="kube-system/etcd-multinode-240000" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c9e76e4-ed9f-4595-aa5e-ddd6e74f4e93\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"$setElementOrder/hostIPs\\\":[{\\\"ip\\\":\\\"172.28.229.19\\\"}],\\\"$setElementOrder/podIPs\\\":[{\\\"ip\\\":\\\"172.28.229.19\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2024-03-28T01:32:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2024-03-28T01:32:14Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2024-03-28T01:32:14Z\\\",\\\"message\\\":\\\"cont
ainers with unready status: [etcd]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2024-03-28T01:32:14Z\\\",\\\"message\\\":\\\"containers with unready status: [etcd]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2024-03-28T01:32:14Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"docker://ab4a76ecb029b98cd5b2c7ce34c9d81d5da9b76e6721e8e54059f840240fcb66\\\",\\\"image\\\":\\\"registry.k8s.io/etcd:3.5.12-0\\\",\\\"imageID\\\":\\\"docker-pullable://registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2024-03-28T01:32:15Z\\\"}}}],\\\"hostIP\\\":\\\"172.28.229.19\\\",\\\"hostIPs\\\"
:[{\\\"ip\\\":\\\"172.28.229.19\\\"},{\\\"$patch\\\":\\\"delete\\\",\\\"ip\\\":\\\"172.28.227.122\\\"}],\\\"podIP\\\":\\\"172.28.229.19\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"172.28.229.19\\\"},{\\\"$patch\\\":\\\"delete\\\",\\\"ip\\\":\\\"172.28.227.122\\\"}],\\\"startTime\\\":\\\"2024-03-28T01:32:14Z\\\"}}\" for pod \"kube-system\"/\"etcd-multinode-240000\": pods \"etcd-multinode-240000\" not found"
	I0328 01:33:28.316497    6044 command_runner.go:130] > Mar 28 01:32:19 multinode-240000 kubelet[1533]: I0328 01:32:19.799037    1533 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="08b85a8adf05b50d7739532a291175d4" path="/var/lib/kubelet/pods/08b85a8adf05b50d7739532a291175d4/volumes"
	I0328 01:33:28.316527    6044 command_runner.go:130] > Mar 28 01:32:19 multinode-240000 kubelet[1533]: E0328 01:32:19.799563    1533 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0328 01:33:28.316589    6044 command_runner.go:130] > Mar 28 01:32:19 multinode-240000 kubelet[1533]: E0328 01:32:19.799591    1533 projected.go:200] Error preparing data for projected volume kube-api-access-86msg for pod default/busybox-7fdf7869d9-ct428: object "default"/"kube-root-ca.crt" not registered
	I0328 01:33:28.316589    6044 command_runner.go:130] > Mar 28 01:32:19 multinode-240000 kubelet[1533]: E0328 01:32:19.799660    1533 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/82be2bd2-ca76-4804-8e23-ebd40a434863-kube-api-access-86msg podName:82be2bd2-ca76-4804-8e23-ebd40a434863 nodeName:}" failed. No retries permitted until 2024-03-28 01:32:20.299638671 +0000 UTC m=+6.858648361 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-86msg" (UniqueName: "kubernetes.io/projected/82be2bd2-ca76-4804-8e23-ebd40a434863-kube-api-access-86msg") pod "busybox-7fdf7869d9-ct428" (UID: "82be2bd2-ca76-4804-8e23-ebd40a434863") : object "default"/"kube-root-ca.crt" not registered
	I0328 01:33:28.316589    6044 command_runner.go:130] > Mar 28 01:32:19 multinode-240000 kubelet[1533]: I0328 01:32:19.802339    1533 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3bf911dad00226d1456d6201aff35c8b" path="/var/lib/kubelet/pods/3bf911dad00226d1456d6201aff35c8b/volumes"
	I0328 01:33:28.316589    6044 command_runner.go:130] > Mar 28 01:32:19 multinode-240000 kubelet[1533]: I0328 01:32:19.949419    1533 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/etcd-multinode-240000" podStartSLOduration=0.949323047 podStartE2EDuration="949.323047ms" podCreationTimestamp="2024-03-28 01:32:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-03-28 01:32:19.919943873 +0000 UTC m=+6.478953663" watchObservedRunningTime="2024-03-28 01:32:19.949323047 +0000 UTC m=+6.508332737"
	I0328 01:33:28.316589    6044 command_runner.go:130] > Mar 28 01:32:19 multinode-240000 kubelet[1533]: I0328 01:32:19.949693    1533 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-multinode-240000" podStartSLOduration=0.949665448 podStartE2EDuration="949.665448ms" podCreationTimestamp="2024-03-28 01:32:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-03-28 01:32:19.941427427 +0000 UTC m=+6.500437217" watchObservedRunningTime="2024-03-28 01:32:19.949665448 +0000 UTC m=+6.508675138"
	I0328 01:33:28.316589    6044 command_runner.go:130] > Mar 28 01:32:20 multinode-240000 kubelet[1533]: E0328 01:32:20.230868    1533 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0328 01:33:28.316589    6044 command_runner.go:130] > Mar 28 01:32:20 multinode-240000 kubelet[1533]: E0328 01:32:20.231013    1533 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/dc1416cc-736d-4eab-b95d-e963572b78e3-config-volume podName:dc1416cc-736d-4eab-b95d-e963572b78e3 nodeName:}" failed. No retries permitted until 2024-03-28 01:32:21.230991954 +0000 UTC m=+7.790001744 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/dc1416cc-736d-4eab-b95d-e963572b78e3-config-volume") pod "coredns-76f75df574-776ph" (UID: "dc1416cc-736d-4eab-b95d-e963572b78e3") : object "kube-system"/"coredns" not registered
	I0328 01:33:28.316589    6044 command_runner.go:130] > Mar 28 01:32:20 multinode-240000 kubelet[1533]: E0328 01:32:20.331172    1533 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0328 01:33:28.316589    6044 command_runner.go:130] > Mar 28 01:32:20 multinode-240000 kubelet[1533]: E0328 01:32:20.331223    1533 projected.go:200] Error preparing data for projected volume kube-api-access-86msg for pod default/busybox-7fdf7869d9-ct428: object "default"/"kube-root-ca.crt" not registered
	I0328 01:33:28.316589    6044 command_runner.go:130] > Mar 28 01:32:20 multinode-240000 kubelet[1533]: E0328 01:32:20.331292    1533 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/82be2bd2-ca76-4804-8e23-ebd40a434863-kube-api-access-86msg podName:82be2bd2-ca76-4804-8e23-ebd40a434863 nodeName:}" failed. No retries permitted until 2024-03-28 01:32:21.331274305 +0000 UTC m=+7.890283995 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-86msg" (UniqueName: "kubernetes.io/projected/82be2bd2-ca76-4804-8e23-ebd40a434863-kube-api-access-86msg") pod "busybox-7fdf7869d9-ct428" (UID: "82be2bd2-ca76-4804-8e23-ebd40a434863") : object "default"/"kube-root-ca.crt" not registered
	I0328 01:33:28.316589    6044 command_runner.go:130] > Mar 28 01:32:20 multinode-240000 kubelet[1533]: I0328 01:32:20.880883    1533 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="821d3cf9ae1a9ffce2f350e9ee239e00fd8743eb338fae8a5b39734fc9cabf5e"
	I0328 01:33:28.316589    6044 command_runner.go:130] > Mar 28 01:32:20 multinode-240000 kubelet[1533]: I0328 01:32:20.905234    1533 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dfd01cb54b7d89aef97b057d7578bb34d4f58b0e2c9aacddeeff9fbb19db3cb6"
	I0328 01:33:28.316589    6044 command_runner.go:130] > Mar 28 01:32:21 multinode-240000 kubelet[1533]: E0328 01:32:21.238101    1533 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0328 01:33:28.316589    6044 command_runner.go:130] > Mar 28 01:32:21 multinode-240000 kubelet[1533]: E0328 01:32:21.238271    1533 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/dc1416cc-736d-4eab-b95d-e963572b78e3-config-volume podName:dc1416cc-736d-4eab-b95d-e963572b78e3 nodeName:}" failed. No retries permitted until 2024-03-28 01:32:23.238201582 +0000 UTC m=+9.797211372 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/dc1416cc-736d-4eab-b95d-e963572b78e3-config-volume") pod "coredns-76f75df574-776ph" (UID: "dc1416cc-736d-4eab-b95d-e963572b78e3") : object "kube-system"/"coredns" not registered
	I0328 01:33:28.316589    6044 command_runner.go:130] > Mar 28 01:32:21 multinode-240000 kubelet[1533]: I0328 01:32:21.272138    1533 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="347f7ad7ebaed8796c8b12cf936e661c605c1c7a9dc02ccb15b4c682a96c1058"
	I0328 01:33:28.317134    6044 command_runner.go:130] > Mar 28 01:32:21 multinode-240000 kubelet[1533]: E0328 01:32:21.338941    1533 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0328 01:33:28.317134    6044 command_runner.go:130] > Mar 28 01:32:21 multinode-240000 kubelet[1533]: E0328 01:32:21.338996    1533 projected.go:200] Error preparing data for projected volume kube-api-access-86msg for pod default/busybox-7fdf7869d9-ct428: object "default"/"kube-root-ca.crt" not registered
	I0328 01:33:28.317134    6044 command_runner.go:130] > Mar 28 01:32:21 multinode-240000 kubelet[1533]: E0328 01:32:21.339062    1533 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/82be2bd2-ca76-4804-8e23-ebd40a434863-kube-api-access-86msg podName:82be2bd2-ca76-4804-8e23-ebd40a434863 nodeName:}" failed. No retries permitted until 2024-03-28 01:32:23.339043635 +0000 UTC m=+9.898053325 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-86msg" (UniqueName: "kubernetes.io/projected/82be2bd2-ca76-4804-8e23-ebd40a434863-kube-api-access-86msg") pod "busybox-7fdf7869d9-ct428" (UID: "82be2bd2-ca76-4804-8e23-ebd40a434863") : object "default"/"kube-root-ca.crt" not registered
	I0328 01:33:28.317270    6044 command_runner.go:130] > Mar 28 01:32:21 multinode-240000 kubelet[1533]: E0328 01:32:21.791679    1533 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-76f75df574-776ph" podUID="dc1416cc-736d-4eab-b95d-e963572b78e3"
	I0328 01:33:28.317270    6044 command_runner.go:130] > Mar 28 01:32:21 multinode-240000 kubelet[1533]: E0328 01:32:21.792217    1533 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7fdf7869d9-ct428" podUID="82be2bd2-ca76-4804-8e23-ebd40a434863"
	I0328 01:33:28.317332    6044 command_runner.go:130] > Mar 28 01:32:23 multinode-240000 kubelet[1533]: E0328 01:32:23.261654    1533 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0328 01:33:28.317332    6044 command_runner.go:130] > Mar 28 01:32:23 multinode-240000 kubelet[1533]: E0328 01:32:23.261858    1533 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/dc1416cc-736d-4eab-b95d-e963572b78e3-config-volume podName:dc1416cc-736d-4eab-b95d-e963572b78e3 nodeName:}" failed. No retries permitted until 2024-03-28 01:32:27.261834961 +0000 UTC m=+13.820844751 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/dc1416cc-736d-4eab-b95d-e963572b78e3-config-volume") pod "coredns-76f75df574-776ph" (UID: "dc1416cc-736d-4eab-b95d-e963572b78e3") : object "kube-system"/"coredns" not registered
	I0328 01:33:28.317332    6044 command_runner.go:130] > Mar 28 01:32:23 multinode-240000 kubelet[1533]: E0328 01:32:23.362225    1533 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0328 01:33:28.317332    6044 command_runner.go:130] > Mar 28 01:32:23 multinode-240000 kubelet[1533]: E0328 01:32:23.362265    1533 projected.go:200] Error preparing data for projected volume kube-api-access-86msg for pod default/busybox-7fdf7869d9-ct428: object "default"/"kube-root-ca.crt" not registered
	I0328 01:33:28.317332    6044 command_runner.go:130] > Mar 28 01:32:23 multinode-240000 kubelet[1533]: E0328 01:32:23.362325    1533 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/82be2bd2-ca76-4804-8e23-ebd40a434863-kube-api-access-86msg podName:82be2bd2-ca76-4804-8e23-ebd40a434863 nodeName:}" failed. No retries permitted until 2024-03-28 01:32:27.362305413 +0000 UTC m=+13.921315103 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-86msg" (UniqueName: "kubernetes.io/projected/82be2bd2-ca76-4804-8e23-ebd40a434863-kube-api-access-86msg") pod "busybox-7fdf7869d9-ct428" (UID: "82be2bd2-ca76-4804-8e23-ebd40a434863") : object "default"/"kube-root-ca.crt" not registered
	I0328 01:33:28.317332    6044 command_runner.go:130] > Mar 28 01:32:23 multinode-240000 kubelet[1533]: E0328 01:32:23.790396    1533 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-76f75df574-776ph" podUID="dc1416cc-736d-4eab-b95d-e963572b78e3"
	I0328 01:33:28.317332    6044 command_runner.go:130] > Mar 28 01:32:23 multinode-240000 kubelet[1533]: E0328 01:32:23.790902    1533 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7fdf7869d9-ct428" podUID="82be2bd2-ca76-4804-8e23-ebd40a434863"
	I0328 01:33:28.317332    6044 command_runner.go:130] > Mar 28 01:32:25 multinode-240000 kubelet[1533]: E0328 01:32:25.790044    1533 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7fdf7869d9-ct428" podUID="82be2bd2-ca76-4804-8e23-ebd40a434863"
	I0328 01:33:28.317332    6044 command_runner.go:130] > Mar 28 01:32:25 multinode-240000 kubelet[1533]: E0328 01:32:25.790562    1533 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-76f75df574-776ph" podUID="dc1416cc-736d-4eab-b95d-e963572b78e3"
	I0328 01:33:28.317332    6044 command_runner.go:130] > Mar 28 01:32:27 multinode-240000 kubelet[1533]: E0328 01:32:27.292215    1533 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0328 01:33:28.317332    6044 command_runner.go:130] > Mar 28 01:32:27 multinode-240000 kubelet[1533]: E0328 01:32:27.292399    1533 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/dc1416cc-736d-4eab-b95d-e963572b78e3-config-volume podName:dc1416cc-736d-4eab-b95d-e963572b78e3 nodeName:}" failed. No retries permitted until 2024-03-28 01:32:35.292355671 +0000 UTC m=+21.851365461 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/dc1416cc-736d-4eab-b95d-e963572b78e3-config-volume") pod "coredns-76f75df574-776ph" (UID: "dc1416cc-736d-4eab-b95d-e963572b78e3") : object "kube-system"/"coredns" not registered
	I0328 01:33:28.317332    6044 command_runner.go:130] > Mar 28 01:32:27 multinode-240000 kubelet[1533]: E0328 01:32:27.393085    1533 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0328 01:33:28.317332    6044 command_runner.go:130] > Mar 28 01:32:27 multinode-240000 kubelet[1533]: E0328 01:32:27.393207    1533 projected.go:200] Error preparing data for projected volume kube-api-access-86msg for pod default/busybox-7fdf7869d9-ct428: object "default"/"kube-root-ca.crt" not registered
	I0328 01:33:28.317856    6044 command_runner.go:130] > Mar 28 01:32:27 multinode-240000 kubelet[1533]: E0328 01:32:27.393270    1533 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/82be2bd2-ca76-4804-8e23-ebd40a434863-kube-api-access-86msg podName:82be2bd2-ca76-4804-8e23-ebd40a434863 nodeName:}" failed. No retries permitted until 2024-03-28 01:32:35.393251521 +0000 UTC m=+21.952261211 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-86msg" (UniqueName: "kubernetes.io/projected/82be2bd2-ca76-4804-8e23-ebd40a434863-kube-api-access-86msg") pod "busybox-7fdf7869d9-ct428" (UID: "82be2bd2-ca76-4804-8e23-ebd40a434863") : object "default"/"kube-root-ca.crt" not registered
	I0328 01:33:28.317856    6044 command_runner.go:130] > Mar 28 01:32:27 multinode-240000 kubelet[1533]: E0328 01:32:27.791559    1533 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7fdf7869d9-ct428" podUID="82be2bd2-ca76-4804-8e23-ebd40a434863"
	I0328 01:33:28.317856    6044 command_runner.go:130] > Mar 28 01:32:27 multinode-240000 kubelet[1533]: E0328 01:32:27.792839    1533 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-76f75df574-776ph" podUID="dc1416cc-736d-4eab-b95d-e963572b78e3"
	I0328 01:33:28.317856    6044 command_runner.go:130] > Mar 28 01:32:29 multinode-240000 kubelet[1533]: E0328 01:32:29.790087    1533 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-76f75df574-776ph" podUID="dc1416cc-736d-4eab-b95d-e963572b78e3"
	I0328 01:33:28.318022    6044 command_runner.go:130] > Mar 28 01:32:29 multinode-240000 kubelet[1533]: E0328 01:32:29.793138    1533 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7fdf7869d9-ct428" podUID="82be2bd2-ca76-4804-8e23-ebd40a434863"
	I0328 01:33:28.318080    6044 command_runner.go:130] > Mar 28 01:32:31 multinode-240000 kubelet[1533]: E0328 01:32:31.791578    1533 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7fdf7869d9-ct428" podUID="82be2bd2-ca76-4804-8e23-ebd40a434863"
	I0328 01:33:28.318142    6044 command_runner.go:130] > Mar 28 01:32:31 multinode-240000 kubelet[1533]: E0328 01:32:31.792402    1533 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-76f75df574-776ph" podUID="dc1416cc-736d-4eab-b95d-e963572b78e3"
	I0328 01:33:28.318142    6044 command_runner.go:130] > Mar 28 01:32:33 multinode-240000 kubelet[1533]: E0328 01:32:33.789342    1533 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7fdf7869d9-ct428" podUID="82be2bd2-ca76-4804-8e23-ebd40a434863"
	I0328 01:33:28.318207    6044 command_runner.go:130] > Mar 28 01:32:33 multinode-240000 kubelet[1533]: E0328 01:32:33.790306    1533 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-76f75df574-776ph" podUID="dc1416cc-736d-4eab-b95d-e963572b78e3"
	I0328 01:33:28.318207    6044 command_runner.go:130] > Mar 28 01:32:35 multinode-240000 kubelet[1533]: E0328 01:32:35.358933    1533 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0328 01:33:28.318272    6044 command_runner.go:130] > Mar 28 01:32:35 multinode-240000 kubelet[1533]: E0328 01:32:35.359250    1533 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/dc1416cc-736d-4eab-b95d-e963572b78e3-config-volume podName:dc1416cc-736d-4eab-b95d-e963572b78e3 nodeName:}" failed. No retries permitted until 2024-03-28 01:32:51.359180546 +0000 UTC m=+37.918190236 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/dc1416cc-736d-4eab-b95d-e963572b78e3-config-volume") pod "coredns-76f75df574-776ph" (UID: "dc1416cc-736d-4eab-b95d-e963572b78e3") : object "kube-system"/"coredns" not registered
	I0328 01:33:28.318272    6044 command_runner.go:130] > Mar 28 01:32:35 multinode-240000 kubelet[1533]: E0328 01:32:35.460013    1533 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0328 01:33:28.318375    6044 command_runner.go:130] > Mar 28 01:32:35 multinode-240000 kubelet[1533]: E0328 01:32:35.460054    1533 projected.go:200] Error preparing data for projected volume kube-api-access-86msg for pod default/busybox-7fdf7869d9-ct428: object "default"/"kube-root-ca.crt" not registered
	I0328 01:33:28.318430    6044 command_runner.go:130] > Mar 28 01:32:35 multinode-240000 kubelet[1533]: E0328 01:32:35.460129    1533 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/82be2bd2-ca76-4804-8e23-ebd40a434863-kube-api-access-86msg podName:82be2bd2-ca76-4804-8e23-ebd40a434863 nodeName:}" failed. No retries permitted until 2024-03-28 01:32:51.460096057 +0000 UTC m=+38.019105747 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-86msg" (UniqueName: "kubernetes.io/projected/82be2bd2-ca76-4804-8e23-ebd40a434863-kube-api-access-86msg") pod "busybox-7fdf7869d9-ct428" (UID: "82be2bd2-ca76-4804-8e23-ebd40a434863") : object "default"/"kube-root-ca.crt" not registered
	I0328 01:33:28.318494    6044 command_runner.go:130] > Mar 28 01:32:35 multinode-240000 kubelet[1533]: E0328 01:32:35.790050    1533 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7fdf7869d9-ct428" podUID="82be2bd2-ca76-4804-8e23-ebd40a434863"
	I0328 01:33:28.318593    6044 command_runner.go:130] > Mar 28 01:32:35 multinode-240000 kubelet[1533]: E0328 01:32:35.792176    1533 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-76f75df574-776ph" podUID="dc1416cc-736d-4eab-b95d-e963572b78e3"
	I0328 01:33:28.318593    6044 command_runner.go:130] > Mar 28 01:32:37 multinode-240000 kubelet[1533]: E0328 01:32:37.791217    1533 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-76f75df574-776ph" podUID="dc1416cc-736d-4eab-b95d-e963572b78e3"
	I0328 01:33:28.318593    6044 command_runner.go:130] > Mar 28 01:32:37 multinode-240000 kubelet[1533]: E0328 01:32:37.792228    1533 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7fdf7869d9-ct428" podUID="82be2bd2-ca76-4804-8e23-ebd40a434863"
	I0328 01:33:28.318593    6044 command_runner.go:130] > Mar 28 01:32:39 multinode-240000 kubelet[1533]: E0328 01:32:39.789082    1533 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7fdf7869d9-ct428" podUID="82be2bd2-ca76-4804-8e23-ebd40a434863"
	I0328 01:33:28.318593    6044 command_runner.go:130] > Mar 28 01:32:39 multinode-240000 kubelet[1533]: E0328 01:32:39.789888    1533 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-76f75df574-776ph" podUID="dc1416cc-736d-4eab-b95d-e963572b78e3"
	I0328 01:33:28.318593    6044 command_runner.go:130] > Mar 28 01:32:41 multinode-240000 kubelet[1533]: E0328 01:32:41.789933    1533 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7fdf7869d9-ct428" podUID="82be2bd2-ca76-4804-8e23-ebd40a434863"
	I0328 01:33:28.318593    6044 command_runner.go:130] > Mar 28 01:32:41 multinode-240000 kubelet[1533]: E0328 01:32:41.790703    1533 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-76f75df574-776ph" podUID="dc1416cc-736d-4eab-b95d-e963572b78e3"
	I0328 01:33:28.318593    6044 command_runner.go:130] > Mar 28 01:32:43 multinode-240000 kubelet[1533]: E0328 01:32:43.789453    1533 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7fdf7869d9-ct428" podUID="82be2bd2-ca76-4804-8e23-ebd40a434863"
	I0328 01:33:28.318593    6044 command_runner.go:130] > Mar 28 01:32:43 multinode-240000 kubelet[1533]: E0328 01:32:43.790318    1533 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-76f75df574-776ph" podUID="dc1416cc-736d-4eab-b95d-e963572b78e3"
	I0328 01:33:28.318593    6044 command_runner.go:130] > Mar 28 01:32:45 multinode-240000 kubelet[1533]: E0328 01:32:45.789795    1533 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-76f75df574-776ph" podUID="dc1416cc-736d-4eab-b95d-e963572b78e3"
	I0328 01:33:28.318593    6044 command_runner.go:130] > Mar 28 01:32:45 multinode-240000 kubelet[1533]: E0328 01:32:45.790497    1533 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7fdf7869d9-ct428" podUID="82be2bd2-ca76-4804-8e23-ebd40a434863"
	I0328 01:33:28.318593    6044 command_runner.go:130] > Mar 28 01:32:47 multinode-240000 kubelet[1533]: E0328 01:32:47.789306    1533 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7fdf7869d9-ct428" podUID="82be2bd2-ca76-4804-8e23-ebd40a434863"
	I0328 01:33:28.318593    6044 command_runner.go:130] > Mar 28 01:32:47 multinode-240000 kubelet[1533]: E0328 01:32:47.790760    1533 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-76f75df574-776ph" podUID="dc1416cc-736d-4eab-b95d-e963572b78e3"
	I0328 01:33:28.319118    6044 command_runner.go:130] > Mar 28 01:32:49 multinode-240000 kubelet[1533]: E0328 01:32:49.790669    1533 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-76f75df574-776ph" podUID="dc1416cc-736d-4eab-b95d-e963572b78e3"
	I0328 01:33:28.319174    6044 command_runner.go:130] > Mar 28 01:32:49 multinode-240000 kubelet[1533]: E0328 01:32:49.800302    1533 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7fdf7869d9-ct428" podUID="82be2bd2-ca76-4804-8e23-ebd40a434863"
	I0328 01:33:28.319174    6044 command_runner.go:130] > Mar 28 01:32:51 multinode-240000 kubelet[1533]: E0328 01:32:51.398046    1533 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0328 01:33:28.319353    6044 command_runner.go:130] > Mar 28 01:32:51 multinode-240000 kubelet[1533]: E0328 01:32:51.399557    1533 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/dc1416cc-736d-4eab-b95d-e963572b78e3-config-volume podName:dc1416cc-736d-4eab-b95d-e963572b78e3 nodeName:}" failed. No retries permitted until 2024-03-28 01:33:23.399534782 +0000 UTC m=+69.958544472 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/dc1416cc-736d-4eab-b95d-e963572b78e3-config-volume") pod "coredns-76f75df574-776ph" (UID: "dc1416cc-736d-4eab-b95d-e963572b78e3") : object "kube-system"/"coredns" not registered
	I0328 01:33:28.319409    6044 command_runner.go:130] > Mar 28 01:32:51 multinode-240000 kubelet[1533]: E0328 01:32:51.499389    1533 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0328 01:33:28.319409    6044 command_runner.go:130] > Mar 28 01:32:51 multinode-240000 kubelet[1533]: E0328 01:32:51.499479    1533 projected.go:200] Error preparing data for projected volume kube-api-access-86msg for pod default/busybox-7fdf7869d9-ct428: object "default"/"kube-root-ca.crt" not registered
	I0328 01:33:28.319409    6044 command_runner.go:130] > Mar 28 01:32:51 multinode-240000 kubelet[1533]: E0328 01:32:51.499555    1533 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/82be2bd2-ca76-4804-8e23-ebd40a434863-kube-api-access-86msg podName:82be2bd2-ca76-4804-8e23-ebd40a434863 nodeName:}" failed. No retries permitted until 2024-03-28 01:33:23.499533548 +0000 UTC m=+70.058543238 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-86msg" (UniqueName: "kubernetes.io/projected/82be2bd2-ca76-4804-8e23-ebd40a434863-kube-api-access-86msg") pod "busybox-7fdf7869d9-ct428" (UID: "82be2bd2-ca76-4804-8e23-ebd40a434863") : object "default"/"kube-root-ca.crt" not registered
	I0328 01:33:28.319409    6044 command_runner.go:130] > Mar 28 01:32:51 multinode-240000 kubelet[1533]: E0328 01:32:51.789982    1533 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7fdf7869d9-ct428" podUID="82be2bd2-ca76-4804-8e23-ebd40a434863"
	I0328 01:33:28.319409    6044 command_runner.go:130] > Mar 28 01:32:51 multinode-240000 kubelet[1533]: E0328 01:32:51.790491    1533 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-76f75df574-776ph" podUID="dc1416cc-736d-4eab-b95d-e963572b78e3"
	I0328 01:33:28.319409    6044 command_runner.go:130] > Mar 28 01:32:52 multinode-240000 kubelet[1533]: I0328 01:32:52.819055    1533 scope.go:117] "RemoveContainer" containerID="d02996b2d57bf7439b634e180f3f28e83a0825e92695a9ca17ecca77cbb5da1c"
	I0328 01:33:28.319409    6044 command_runner.go:130] > Mar 28 01:32:52 multinode-240000 kubelet[1533]: I0328 01:32:52.819508    1533 scope.go:117] "RemoveContainer" containerID="4dcf03394ea80c2d0427ea263be7c533ab3aaf86ba89e254db95cbdd1b70a343"
	I0328 01:33:28.319409    6044 command_runner.go:130] > Mar 28 01:32:52 multinode-240000 kubelet[1533]: E0328 01:32:52.820004    1533 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(3f881c2f-5b9a-4dc5-bc8c-56784b9ff60f)\"" pod="kube-system/storage-provisioner" podUID="3f881c2f-5b9a-4dc5-bc8c-56784b9ff60f"
	I0328 01:33:28.319409    6044 command_runner.go:130] > Mar 28 01:32:53 multinode-240000 kubelet[1533]: E0328 01:32:53.789452    1533 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7fdf7869d9-ct428" podUID="82be2bd2-ca76-4804-8e23-ebd40a434863"
	I0328 01:33:28.319409    6044 command_runner.go:130] > Mar 28 01:32:53 multinode-240000 kubelet[1533]: E0328 01:32:53.791042    1533 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-76f75df574-776ph" podUID="dc1416cc-736d-4eab-b95d-e963572b78e3"
	I0328 01:33:28.319409    6044 command_runner.go:130] > Mar 28 01:32:53 multinode-240000 kubelet[1533]: I0328 01:32:53.945064    1533 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
	I0328 01:33:28.319409    6044 command_runner.go:130] > Mar 28 01:33:04 multinode-240000 kubelet[1533]: I0328 01:33:04.789137    1533 scope.go:117] "RemoveContainer" containerID="4dcf03394ea80c2d0427ea263be7c533ab3aaf86ba89e254db95cbdd1b70a343"
	I0328 01:33:28.319409    6044 command_runner.go:130] > Mar 28 01:33:13 multinode-240000 kubelet[1533]: I0328 01:33:13.803616    1533 scope.go:117] "RemoveContainer" containerID="66f15076d3443d3fc3179676ba45f1cbac7cf2eb673e7741a3dddae0eb5baac8"
	I0328 01:33:28.319409    6044 command_runner.go:130] > Mar 28 01:33:13 multinode-240000 kubelet[1533]: E0328 01:33:13.838374    1533 iptables.go:575] "Could not set up iptables canary" err=<
	I0328 01:33:28.319409    6044 command_runner.go:130] > Mar 28 01:33:13 multinode-240000 kubelet[1533]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	I0328 01:33:28.319409    6044 command_runner.go:130] > Mar 28 01:33:13 multinode-240000 kubelet[1533]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	I0328 01:33:28.319409    6044 command_runner.go:130] > Mar 28 01:33:13 multinode-240000 kubelet[1533]:         Perhaps ip6tables or your kernel needs to be upgraded.
	I0328 01:33:28.319409    6044 command_runner.go:130] > Mar 28 01:33:13 multinode-240000 kubelet[1533]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	I0328 01:33:28.319409    6044 command_runner.go:130] > Mar 28 01:33:13 multinode-240000 kubelet[1533]: I0328 01:33:13.850324    1533 scope.go:117] "RemoveContainer" containerID="a01212226d03a29a5f7e096880ecf627817c14801c81f452beaa1a398b97cfe3"
	I0328 01:33:28.369437    6044 logs.go:123] Gathering logs for dmesg ...
	I0328 01:33:28.369437    6044 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:33:28.396447    6044 command_runner.go:130] > [Mar28 01:30] You have booted with nomodeset. This means your GPU drivers are DISABLED
	I0328 01:33:28.396447    6044 command_runner.go:130] > [  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	I0328 01:33:28.396447    6044 command_runner.go:130] > [  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	I0328 01:33:28.396447    6044 command_runner.go:130] > [  +0.141916] RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
	I0328 01:33:28.396447    6044 command_runner.go:130] > [  +0.024106] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	I0328 01:33:28.396447    6044 command_runner.go:130] > [  +0.000005] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	I0328 01:33:28.396447    6044 command_runner.go:130] > [  +0.000001] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	I0328 01:33:28.396447    6044 command_runner.go:130] > [  +0.068008] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	I0328 01:33:28.396447    6044 command_runner.go:130] > [  +0.027431] * Found PM-Timer Bug on the chipset. Due to workarounds for a bug,
	I0328 01:33:28.396447    6044 command_runner.go:130] >               * this clock source is slow. Consider trying other clock sources
	I0328 01:33:28.396447    6044 command_runner.go:130] > [  +5.946328] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	I0328 01:33:28.396447    6044 command_runner.go:130] > [  +0.758535] psmouse serio1: trackpoint: failed to get extended button data, assuming 3 buttons
	I0328 01:33:28.396447    6044 command_runner.go:130] > [  +1.937420] systemd-fstab-generator[115]: Ignoring "noauto" option for root device
	I0328 01:33:28.396447    6044 command_runner.go:130] > [  +7.347197] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	I0328 01:33:28.396447    6044 command_runner.go:130] > [  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	I0328 01:33:28.396447    6044 command_runner.go:130] > [  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	I0328 01:33:28.396447    6044 command_runner.go:130] > [Mar28 01:31] systemd-fstab-generator[640]: Ignoring "noauto" option for root device
	I0328 01:33:28.396447    6044 command_runner.go:130] > [  +0.201840] systemd-fstab-generator[653]: Ignoring "noauto" option for root device
	I0328 01:33:28.396447    6044 command_runner.go:130] > [Mar28 01:32] systemd-fstab-generator[979]: Ignoring "noauto" option for root device
	I0328 01:33:28.396447    6044 command_runner.go:130] > [  +0.108343] kauditd_printk_skb: 73 callbacks suppressed
	I0328 01:33:28.396447    6044 command_runner.go:130] > [  +0.586323] systemd-fstab-generator[1017]: Ignoring "noauto" option for root device
	I0328 01:33:28.396447    6044 command_runner.go:130] > [  +0.218407] systemd-fstab-generator[1029]: Ignoring "noauto" option for root device
	I0328 01:33:28.396447    6044 command_runner.go:130] > [  +0.238441] systemd-fstab-generator[1043]: Ignoring "noauto" option for root device
	I0328 01:33:28.396447    6044 command_runner.go:130] > [  +3.002162] systemd-fstab-generator[1230]: Ignoring "noauto" option for root device
	I0328 01:33:28.396447    6044 command_runner.go:130] > [  +0.206082] systemd-fstab-generator[1242]: Ignoring "noauto" option for root device
	I0328 01:33:28.396447    6044 command_runner.go:130] > [  +0.206423] systemd-fstab-generator[1254]: Ignoring "noauto" option for root device
	I0328 01:33:28.396447    6044 command_runner.go:130] > [  +0.316656] systemd-fstab-generator[1269]: Ignoring "noauto" option for root device
	I0328 01:33:28.396447    6044 command_runner.go:130] > [  +0.941398] systemd-fstab-generator[1391]: Ignoring "noauto" option for root device
	I0328 01:33:28.396447    6044 command_runner.go:130] > [  +0.123620] kauditd_printk_skb: 205 callbacks suppressed
	I0328 01:33:28.396447    6044 command_runner.go:130] > [  +3.687763] systemd-fstab-generator[1526]: Ignoring "noauto" option for root device
	I0328 01:33:28.396447    6044 command_runner.go:130] > [  +1.367953] kauditd_printk_skb: 44 callbacks suppressed
	I0328 01:33:28.396447    6044 command_runner.go:130] > [  +6.014600] kauditd_printk_skb: 30 callbacks suppressed
	I0328 01:33:28.396447    6044 command_runner.go:130] > [  +4.465273] systemd-fstab-generator[3066]: Ignoring "noauto" option for root device
	I0328 01:33:28.396447    6044 command_runner.go:130] > [  +7.649293] kauditd_printk_skb: 70 callbacks suppressed
	I0328 01:33:28.398449    6044 logs.go:123] Gathering logs for kube-scheduler [7061eab02790] ...
	I0328 01:33:28.398449    6044 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7061eab02790"
	I0328 01:33:28.430437    6044 command_runner.go:130] ! I0328 01:07:24.655923       1 serving.go:380] Generated self-signed cert in-memory
	I0328 01:33:28.430437    6044 command_runner.go:130] ! W0328 01:07:26.955719       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	I0328 01:33:28.430437    6044 command_runner.go:130] ! W0328 01:07:26.956050       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	I0328 01:33:28.430437    6044 command_runner.go:130] ! W0328 01:07:26.956340       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	I0328 01:33:28.430437    6044 command_runner.go:130] ! W0328 01:07:26.956518       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0328 01:33:28.431519    6044 command_runner.go:130] ! I0328 01:07:27.011654       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.29.3"
	I0328 01:33:28.431519    6044 command_runner.go:130] ! I0328 01:07:27.011702       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0328 01:33:28.431519    6044 command_runner.go:130] ! I0328 01:07:27.016073       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0328 01:33:28.431519    6044 command_runner.go:130] ! I0328 01:07:27.016395       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0328 01:33:28.431519    6044 command_runner.go:130] ! I0328 01:07:27.016638       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0328 01:33:28.431519    6044 command_runner.go:130] ! W0328 01:07:27.041308       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0328 01:33:28.431519    6044 command_runner.go:130] ! E0328 01:07:27.041400       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0328 01:33:28.431519    6044 command_runner.go:130] ! W0328 01:07:27.041664       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0328 01:33:28.431519    6044 command_runner.go:130] ! E0328 01:07:27.043394       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0328 01:33:28.431519    6044 command_runner.go:130] ! I0328 01:07:27.016423       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0328 01:33:28.431519    6044 command_runner.go:130] ! W0328 01:07:27.042004       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0328 01:33:28.431519    6044 command_runner.go:130] ! E0328 01:07:27.047333       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0328 01:33:28.431519    6044 command_runner.go:130] ! W0328 01:07:27.042140       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0328 01:33:28.431519    6044 command_runner.go:130] ! E0328 01:07:27.047417       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0328 01:33:28.431519    6044 command_runner.go:130] ! W0328 01:07:27.042578       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0328 01:33:28.431519    6044 command_runner.go:130] ! E0328 01:07:27.047834       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0328 01:33:28.431519    6044 command_runner.go:130] ! W0328 01:07:27.042825       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0328 01:33:28.431519    6044 command_runner.go:130] ! E0328 01:07:27.047881       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0328 01:33:28.431519    6044 command_runner.go:130] ! W0328 01:07:27.054199       1 reflector.go:539] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0328 01:33:28.431519    6044 command_runner.go:130] ! E0328 01:07:27.054246       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0328 01:33:28.431519    6044 command_runner.go:130] ! W0328 01:07:27.054853       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0328 01:33:28.431519    6044 command_runner.go:130] ! E0328 01:07:27.054928       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0328 01:33:28.431519    6044 command_runner.go:130] ! W0328 01:07:27.055680       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0328 01:33:28.431519    6044 command_runner.go:130] ! E0328 01:07:27.056176       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0328 01:33:28.431519    6044 command_runner.go:130] ! W0328 01:07:27.056445       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0328 01:33:28.431519    6044 command_runner.go:130] ! E0328 01:07:27.056649       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0328 01:33:28.431519    6044 command_runner.go:130] ! W0328 01:07:27.056923       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0328 01:33:28.431519    6044 command_runner.go:130] ! E0328 01:07:27.057184       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0328 01:33:28.432445    6044 command_runner.go:130] ! W0328 01:07:27.057363       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0328 01:33:28.432445    6044 command_runner.go:130] ! E0328 01:07:27.057575       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0328 01:33:28.432445    6044 command_runner.go:130] ! W0328 01:07:27.057920       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0328 01:33:28.432445    6044 command_runner.go:130] ! E0328 01:07:27.058160       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0328 01:33:28.432445    6044 command_runner.go:130] ! W0328 01:07:27.058539       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0328 01:33:28.432445    6044 command_runner.go:130] ! E0328 01:07:27.058924       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0328 01:33:28.432445    6044 command_runner.go:130] ! W0328 01:07:27.059533       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0328 01:33:28.432445    6044 command_runner.go:130] ! E0328 01:07:27.060749       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0328 01:33:28.432445    6044 command_runner.go:130] ! W0328 01:07:27.927413       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0328 01:33:28.432445    6044 command_runner.go:130] ! E0328 01:07:27.927826       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0328 01:33:28.432445    6044 command_runner.go:130] ! W0328 01:07:28.013939       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0328 01:33:28.432445    6044 command_runner.go:130] ! E0328 01:07:28.014242       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0328 01:33:28.432445    6044 command_runner.go:130] ! W0328 01:07:28.056311       1 reflector.go:539] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0328 01:33:28.432445    6044 command_runner.go:130] ! E0328 01:07:28.058850       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0328 01:33:28.432445    6044 command_runner.go:130] ! W0328 01:07:28.076506       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0328 01:33:28.432445    6044 command_runner.go:130] ! E0328 01:07:28.076537       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0328 01:33:28.432445    6044 command_runner.go:130] ! W0328 01:07:28.106836       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0328 01:33:28.432445    6044 command_runner.go:130] ! E0328 01:07:28.107081       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0328 01:33:28.432445    6044 command_runner.go:130] ! W0328 01:07:28.240756       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0328 01:33:28.432445    6044 command_runner.go:130] ! E0328 01:07:28.240834       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0328 01:33:28.432445    6044 command_runner.go:130] ! W0328 01:07:28.255074       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0328 01:33:28.432445    6044 command_runner.go:130] ! E0328 01:07:28.255356       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0328 01:33:28.432445    6044 command_runner.go:130] ! W0328 01:07:28.278207       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0328 01:33:28.432445    6044 command_runner.go:130] ! E0328 01:07:28.278668       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0328 01:33:28.432445    6044 command_runner.go:130] ! W0328 01:07:28.381584       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0328 01:33:28.432445    6044 command_runner.go:130] ! E0328 01:07:28.381627       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0328 01:33:28.432445    6044 command_runner.go:130] ! W0328 01:07:28.514618       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0328 01:33:28.432445    6044 command_runner.go:130] ! E0328 01:07:28.515155       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0328 01:33:28.432445    6044 command_runner.go:130] ! W0328 01:07:28.528993       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0328 01:33:28.432445    6044 command_runner.go:130] ! E0328 01:07:28.529395       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0328 01:33:28.432445    6044 command_runner.go:130] ! W0328 01:07:28.532653       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0328 01:33:28.432445    6044 command_runner.go:130] ! E0328 01:07:28.532704       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0328 01:33:28.433438    6044 command_runner.go:130] ! W0328 01:07:28.584380       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0328 01:33:28.433438    6044 command_runner.go:130] ! E0328 01:07:28.585331       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0328 01:33:28.433438    6044 command_runner.go:130] ! W0328 01:07:28.617611       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0328 01:33:28.433438    6044 command_runner.go:130] ! E0328 01:07:28.618424       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0328 01:33:28.433438    6044 command_runner.go:130] ! W0328 01:07:28.646703       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0328 01:33:28.433438    6044 command_runner.go:130] ! E0328 01:07:28.647128       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0328 01:33:28.433438    6044 command_runner.go:130] ! I0328 01:07:30.316754       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0328 01:33:28.433438    6044 command_runner.go:130] ! I0328 01:29:38.212199       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0328 01:33:28.433438    6044 command_runner.go:130] ! I0328 01:29:38.213339       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0328 01:33:28.433438    6044 command_runner.go:130] ! I0328 01:29:38.213731       1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0328 01:33:28.433438    6044 command_runner.go:130] ! E0328 01:29:38.223877       1 run.go:74] "command failed" err="finished without leader elect"
	I0328 01:33:28.445442    6044 logs.go:123] Gathering logs for kube-proxy [bb0b3c542264] ...
	I0328 01:33:28.445442    6044 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb0b3c542264"
	I0328 01:33:28.476758    6044 command_runner.go:130] ! I0328 01:07:46.260052       1 server_others.go:72] "Using iptables proxy"
	I0328 01:33:28.476758    6044 command_runner.go:130] ! I0328 01:07:46.279785       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["172.28.227.122"]
	I0328 01:33:28.476758    6044 command_runner.go:130] ! I0328 01:07:46.364307       1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
	I0328 01:33:28.476758    6044 command_runner.go:130] ! I0328 01:07:46.364414       1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0328 01:33:28.476758    6044 command_runner.go:130] ! I0328 01:07:46.364433       1 server_others.go:168] "Using iptables Proxier"
	I0328 01:33:28.476758    6044 command_runner.go:130] ! I0328 01:07:46.368524       1 proxier.go:245] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0328 01:33:28.476758    6044 command_runner.go:130] ! I0328 01:07:46.368854       1 server.go:865] "Version info" version="v1.29.3"
	I0328 01:33:28.476758    6044 command_runner.go:130] ! I0328 01:07:46.368909       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0328 01:33:28.476758    6044 command_runner.go:130] ! I0328 01:07:46.370904       1 config.go:188] "Starting service config controller"
	I0328 01:33:28.476758    6044 command_runner.go:130] ! I0328 01:07:46.382389       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0328 01:33:28.476758    6044 command_runner.go:130] ! I0328 01:07:46.382488       1 shared_informer.go:318] Caches are synced for service config
	I0328 01:33:28.476758    6044 command_runner.go:130] ! I0328 01:07:46.371910       1 config.go:97] "Starting endpoint slice config controller"
	I0328 01:33:28.476758    6044 command_runner.go:130] ! I0328 01:07:46.382665       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0328 01:33:28.476758    6044 command_runner.go:130] ! I0328 01:07:46.382693       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0328 01:33:28.476758    6044 command_runner.go:130] ! I0328 01:07:46.374155       1 config.go:315] "Starting node config controller"
	I0328 01:33:28.476758    6044 command_runner.go:130] ! I0328 01:07:46.382861       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0328 01:33:28.476758    6044 command_runner.go:130] ! I0328 01:07:46.382887       1 shared_informer.go:318] Caches are synced for node config
	I0328 01:33:28.478931    6044 logs.go:123] Gathering logs for Docker ...
	I0328 01:33:28.478931    6044 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0328 01:33:28.514708    6044 command_runner.go:130] > Mar 28 01:30:39 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0328 01:33:28.514708    6044 command_runner.go:130] > Mar 28 01:30:39 minikube cri-dockerd[221]: time="2024-03-28T01:30:39Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0328 01:33:28.514792    6044 command_runner.go:130] > Mar 28 01:30:39 minikube cri-dockerd[221]: time="2024-03-28T01:30:39Z" level=info msg="Start docker client with request timeout 0s"
	I0328 01:33:28.514792    6044 command_runner.go:130] > Mar 28 01:30:39 minikube cri-dockerd[221]: time="2024-03-28T01:30:39Z" level=fatal msg="failed to get docker version: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0328 01:33:28.514792    6044 command_runner.go:130] > Mar 28 01:30:39 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0328 01:33:28.514792    6044 command_runner.go:130] > Mar 28 01:30:39 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0328 01:33:28.514792    6044 command_runner.go:130] > Mar 28 01:30:39 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0328 01:33:28.514792    6044 command_runner.go:130] > Mar 28 01:30:42 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 1.
	I0328 01:33:28.514792    6044 command_runner.go:130] > Mar 28 01:30:42 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0328 01:33:28.514792    6044 command_runner.go:130] > Mar 28 01:30:42 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0328 01:33:28.514792    6044 command_runner.go:130] > Mar 28 01:30:42 minikube cri-dockerd[411]: time="2024-03-28T01:30:42Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0328 01:33:28.514792    6044 command_runner.go:130] > Mar 28 01:30:42 minikube cri-dockerd[411]: time="2024-03-28T01:30:42Z" level=info msg="Start docker client with request timeout 0s"
	I0328 01:33:28.514792    6044 command_runner.go:130] > Mar 28 01:30:42 minikube cri-dockerd[411]: time="2024-03-28T01:30:42Z" level=fatal msg="failed to get docker version: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0328 01:33:28.514792    6044 command_runner.go:130] > Mar 28 01:30:42 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0328 01:33:28.514792    6044 command_runner.go:130] > Mar 28 01:30:42 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0328 01:33:28.514792    6044 command_runner.go:130] > Mar 28 01:30:42 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0328 01:33:28.514792    6044 command_runner.go:130] > Mar 28 01:30:44 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 2.
	I0328 01:33:28.514792    6044 command_runner.go:130] > Mar 28 01:30:44 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0328 01:33:28.514792    6044 command_runner.go:130] > Mar 28 01:30:44 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0328 01:33:28.514792    6044 command_runner.go:130] > Mar 28 01:30:44 minikube cri-dockerd[432]: time="2024-03-28T01:30:44Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0328 01:33:28.514792    6044 command_runner.go:130] > Mar 28 01:30:44 minikube cri-dockerd[432]: time="2024-03-28T01:30:44Z" level=info msg="Start docker client with request timeout 0s"
	I0328 01:33:28.514792    6044 command_runner.go:130] > Mar 28 01:30:44 minikube cri-dockerd[432]: time="2024-03-28T01:30:44Z" level=fatal msg="failed to get docker version: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0328 01:33:28.514792    6044 command_runner.go:130] > Mar 28 01:30:44 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0328 01:33:28.514792    6044 command_runner.go:130] > Mar 28 01:30:44 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0328 01:33:28.514792    6044 command_runner.go:130] > Mar 28 01:30:44 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0328 01:33:28.514792    6044 command_runner.go:130] > Mar 28 01:30:46 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 3.
	I0328 01:33:28.514792    6044 command_runner.go:130] > Mar 28 01:30:46 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0328 01:33:28.514792    6044 command_runner.go:130] > Mar 28 01:30:46 minikube systemd[1]: cri-docker.service: Start request repeated too quickly.
	I0328 01:33:28.514792    6044 command_runner.go:130] > Mar 28 01:30:46 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0328 01:33:28.514792    6044 command_runner.go:130] > Mar 28 01:30:46 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0328 01:33:28.514792    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 systemd[1]: Starting Docker Application Container Engine...
	I0328 01:33:28.514792    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[661]: time="2024-03-28T01:31:35.187514586Z" level=info msg="Starting up"
	I0328 01:33:28.514792    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[661]: time="2024-03-28T01:31:35.188793924Z" level=info msg="containerd not running, starting managed containerd"
	I0328 01:33:28.515328    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[661]: time="2024-03-28T01:31:35.190152365Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=667
	I0328 01:33:28.515444    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.231336402Z" level=info msg="starting containerd" revision=dcf2847247e18caba8dce86522029642f60fe96b version=v1.7.14
	I0328 01:33:28.515444    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.261679714Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0328 01:33:28.515501    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.261844319Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0328 01:33:28.515501    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.262043225Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0328 01:33:28.515501    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.262141928Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0328 01:33:28.515501    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.262784947Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0328 01:33:28.515501    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.262879050Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0328 01:33:28.515501    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.263137658Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0328 01:33:28.515501    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.263270562Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0328 01:33:28.515501    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.263294463Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	I0328 01:33:28.515501    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.263307663Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0328 01:33:28.515501    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.263734076Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0328 01:33:28.515501    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.264531200Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0328 01:33:28.515501    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.267908401Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0328 01:33:28.515501    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.268045005Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0328 01:33:28.515501    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.268342414Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0328 01:33:28.515501    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.268438817Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0328 01:33:28.515501    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.269089237Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0328 01:33:28.515501    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.269210440Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	I0328 01:33:28.515501    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.269296343Z" level=info msg="metadata content store policy set" policy=shared
	I0328 01:33:28.515501    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.277331684Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0328 01:33:28.515501    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.277533790Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0328 01:33:28.515501    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.277593492Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0328 01:33:28.515501    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.277648694Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0328 01:33:28.515501    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.277726596Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0328 01:33:28.515501    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.277896701Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0328 01:33:28.516030    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.279273243Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0328 01:33:28.516030    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.279706256Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0328 01:33:28.516077    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.279852560Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0328 01:33:28.516077    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.280041166Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0328 01:33:28.516119    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.280280073Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0328 01:33:28.516119    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.280373676Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0328 01:33:28.516119    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.280594982Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0328 01:33:28.516197    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.280657284Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0328 01:33:28.516197    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.280684285Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0328 01:33:28.516197    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.280713086Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0328 01:33:28.516197    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.280731986Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0328 01:33:28.516275    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.280779288Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0328 01:33:28.516275    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.281122598Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0328 01:33:28.516275    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.281392306Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0328 01:33:28.516356    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.281419307Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0328 01:33:28.516356    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.281475909Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0328 01:33:28.516426    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.281497309Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0328 01:33:28.516451    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.281513210Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0328 01:33:28.516497    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.281527910Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0328 01:33:28.516497    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.281575712Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0328 01:33:28.516561    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.281605113Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0328 01:33:28.516581    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.281624613Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0328 01:33:28.516581    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.281640414Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0328 01:33:28.516581    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.281688915Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0328 01:33:28.516581    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.281906822Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0328 01:33:28.516655    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.282137929Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0328 01:33:28.516655    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.282171230Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0328 01:33:28.516655    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.282426837Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0328 01:33:28.516655    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.282452838Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0328 01:33:28.516655    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.282645244Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0328 01:33:28.516740    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.282848450Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	I0328 01:33:28.516763    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.282869251Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0328 01:33:28.516790    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.282883451Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	I0328 01:33:28.516790    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.282996354Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0328 01:33:28.516790    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.283034556Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0328 01:33:28.516790    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.283048856Z" level=info msg="NRI interface is disabled by configuration."
	I0328 01:33:28.516790    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.283357365Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0328 01:33:28.517922    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.283501170Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0328 01:33:28.517922    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.283575472Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0328 01:33:28.517922    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.283615173Z" level=info msg="containerd successfully booted in 0.056485s"
	I0328 01:33:28.517922    6044 command_runner.go:130] > Mar 28 01:31:36 multinode-240000 dockerd[661]: time="2024-03-28T01:31:36.252048243Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0328 01:33:28.517922    6044 command_runner.go:130] > Mar 28 01:31:36 multinode-240000 dockerd[661]: time="2024-03-28T01:31:36.458814267Z" level=info msg="Loading containers: start."
	I0328 01:33:28.517922    6044 command_runner.go:130] > Mar 28 01:31:36 multinode-240000 dockerd[661]: time="2024-03-28T01:31:36.940030727Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0328 01:33:28.517922    6044 command_runner.go:130] > Mar 28 01:31:37 multinode-240000 dockerd[661]: time="2024-03-28T01:31:37.031415390Z" level=info msg="Loading containers: done."
	I0328 01:33:28.517922    6044 command_runner.go:130] > Mar 28 01:31:37 multinode-240000 dockerd[661]: time="2024-03-28T01:31:37.065830879Z" level=info msg="Docker daemon" commit=8b79278 containerd-snapshotter=false storage-driver=overlay2 version=26.0.0
	I0328 01:33:28.518453    6044 command_runner.go:130] > Mar 28 01:31:37 multinode-240000 dockerd[661]: time="2024-03-28T01:31:37.066918879Z" level=info msg="Daemon has completed initialization"
	I0328 01:33:28.518453    6044 command_runner.go:130] > Mar 28 01:31:37 multinode-240000 dockerd[661]: time="2024-03-28T01:31:37.126063860Z" level=info msg="API listen on /var/run/docker.sock"
	I0328 01:33:28.518495    6044 command_runner.go:130] > Mar 28 01:31:37 multinode-240000 dockerd[661]: time="2024-03-28T01:31:37.126232160Z" level=info msg="API listen on [::]:2376"
	I0328 01:33:28.518495    6044 command_runner.go:130] > Mar 28 01:31:37 multinode-240000 systemd[1]: Started Docker Application Container Engine.
	I0328 01:33:28.518495    6044 command_runner.go:130] > Mar 28 01:32:04 multinode-240000 dockerd[661]: time="2024-03-28T01:32:04.977526069Z" level=info msg="Processing signal 'terminated'"
	I0328 01:33:28.518495    6044 command_runner.go:130] > Mar 28 01:32:04 multinode-240000 dockerd[661]: time="2024-03-28T01:32:04.980026875Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0328 01:33:28.518495    6044 command_runner.go:130] > Mar 28 01:32:04 multinode-240000 systemd[1]: Stopping Docker Application Container Engine...
	I0328 01:33:28.518609    6044 command_runner.go:130] > Mar 28 01:32:04 multinode-240000 dockerd[661]: time="2024-03-28T01:32:04.981008678Z" level=info msg="Daemon shutdown complete"
	I0328 01:33:28.518609    6044 command_runner.go:130] > Mar 28 01:32:04 multinode-240000 dockerd[661]: time="2024-03-28T01:32:04.981100578Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0328 01:33:28.518609    6044 command_runner.go:130] > Mar 28 01:32:04 multinode-240000 dockerd[661]: time="2024-03-28T01:32:04.981126378Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0328 01:33:28.518609    6044 command_runner.go:130] > Mar 28 01:32:05 multinode-240000 systemd[1]: docker.service: Deactivated successfully.
	I0328 01:33:28.518609    6044 command_runner.go:130] > Mar 28 01:32:05 multinode-240000 systemd[1]: Stopped Docker Application Container Engine.
	I0328 01:33:28.518609    6044 command_runner.go:130] > Mar 28 01:32:05 multinode-240000 systemd[1]: Starting Docker Application Container Engine...
	I0328 01:33:28.518609    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1051]: time="2024-03-28T01:32:06.063559195Z" level=info msg="Starting up"
	I0328 01:33:28.518609    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1051]: time="2024-03-28T01:32:06.064631697Z" level=info msg="containerd not running, starting managed containerd"
	I0328 01:33:28.518609    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1051]: time="2024-03-28T01:32:06.065637900Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1057
	I0328 01:33:28.518747    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.100209087Z" level=info msg="starting containerd" revision=dcf2847247e18caba8dce86522029642f60fe96b version=v1.7.14
	I0328 01:33:28.518747    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.130085762Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0328 01:33:28.518747    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.130208062Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0328 01:33:28.518809    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.130256862Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0328 01:33:28.518861    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.130275562Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0328 01:33:28.518861    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.130311762Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0328 01:33:28.518861    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.130326962Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0328 01:33:28.518861    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.130572163Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0328 01:33:28.518861    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.130673463Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0328 01:33:28.518861    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.130696363Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	I0328 01:33:28.518861    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.130764663Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0328 01:33:28.518861    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.130798363Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0328 01:33:28.518861    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.130926864Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0328 01:33:28.518861    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.134236672Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0328 01:33:28.518861    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.134361772Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0328 01:33:28.518861    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.134599073Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0328 01:33:28.518861    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.134797173Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0328 01:33:28.518861    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.135068574Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0328 01:33:28.518861    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.135093174Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	I0328 01:33:28.518861    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.135148374Z" level=info msg="metadata content store policy set" policy=shared
	I0328 01:33:28.518861    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.135673176Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0328 01:33:28.518861    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.135920276Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0328 01:33:28.518861    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.135946676Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0328 01:33:28.518861    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.135980176Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0328 01:33:28.518861    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.135997376Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0328 01:33:28.518861    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.136050377Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0328 01:33:28.518861    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.136660078Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0328 01:33:28.518861    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.136812179Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0328 01:33:28.518861    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.136923379Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0328 01:33:28.518861    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.136946979Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0328 01:33:28.519410    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.136964679Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0328 01:33:28.519458    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.136991479Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0328 01:33:28.519458    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.137010579Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0328 01:33:28.519515    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.137027279Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0328 01:33:28.519515    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.137099479Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0328 01:33:28.519515    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.137235380Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0328 01:33:28.519515    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.137265080Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0328 01:33:28.519592    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.137281180Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0328 01:33:28.519592    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.137304080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0328 01:33:28.519592    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.137320180Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0328 01:33:28.519694    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.137338080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0328 01:33:28.519694    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.137353080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0328 01:33:28.519768    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.137374080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0328 01:33:28.519768    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.137389280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0328 01:33:28.519768    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.137427380Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0328 01:33:28.519829    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.137553380Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0328 01:33:28.519829    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.137633981Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0328 01:33:28.519829    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.137657481Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0328 01:33:28.519889    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.137672181Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0328 01:33:28.519889    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.137686281Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0328 01:33:28.519945    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.137700481Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0328 01:33:28.519945    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.137771381Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0328 01:33:28.519945    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.137797181Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0328 01:33:28.519945    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.137811481Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0328 01:33:28.520006    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.137826081Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0328 01:33:28.520006    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.137953481Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0328 01:33:28.520006    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.137975581Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	I0328 01:33:28.520062    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.137988781Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0328 01:33:28.520062    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.138001082Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	I0328 01:33:28.520120    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.138075582Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0328 01:33:28.520120    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.138191982Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0328 01:33:28.520120    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.138211082Z" level=info msg="NRI interface is disabled by configuration."
	I0328 01:33:28.520177    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.138597783Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0328 01:33:28.520234    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.138694583Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0328 01:33:28.520289    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.138839884Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0328 01:33:28.520344    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.138866684Z" level=info msg="containerd successfully booted in 0.040774s"
	I0328 01:33:28.520344    6044 command_runner.go:130] > Mar 28 01:32:07 multinode-240000 dockerd[1051]: time="2024-03-28T01:32:07.114634333Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0328 01:33:28.520403    6044 command_runner.go:130] > Mar 28 01:32:07 multinode-240000 dockerd[1051]: time="2024-03-28T01:32:07.151787026Z" level=info msg="Loading containers: start."
	I0328 01:33:28.520403    6044 command_runner.go:130] > Mar 28 01:32:07 multinode-240000 dockerd[1051]: time="2024-03-28T01:32:07.470888727Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0328 01:33:28.520403    6044 command_runner.go:130] > Mar 28 01:32:07 multinode-240000 dockerd[1051]: time="2024-03-28T01:32:07.559958251Z" level=info msg="Loading containers: done."
	I0328 01:33:28.520466    6044 command_runner.go:130] > Mar 28 01:32:07 multinode-240000 dockerd[1051]: time="2024-03-28T01:32:07.589960526Z" level=info msg="Docker daemon" commit=8b79278 containerd-snapshotter=false storage-driver=overlay2 version=26.0.0
	I0328 01:33:28.520486    6044 command_runner.go:130] > Mar 28 01:32:07 multinode-240000 dockerd[1051]: time="2024-03-28T01:32:07.590109426Z" level=info msg="Daemon has completed initialization"
	I0328 01:33:28.520486    6044 command_runner.go:130] > Mar 28 01:32:07 multinode-240000 dockerd[1051]: time="2024-03-28T01:32:07.638170147Z" level=info msg="API listen on /var/run/docker.sock"
	I0328 01:33:28.520486    6044 command_runner.go:130] > Mar 28 01:32:07 multinode-240000 systemd[1]: Started Docker Application Container Engine.
	I0328 01:33:28.520542    6044 command_runner.go:130] > Mar 28 01:32:07 multinode-240000 dockerd[1051]: time="2024-03-28T01:32:07.638290047Z" level=info msg="API listen on [::]:2376"
	I0328 01:33:28.520542    6044 command_runner.go:130] > Mar 28 01:32:08 multinode-240000 systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0328 01:33:28.520542    6044 command_runner.go:130] > Mar 28 01:32:08 multinode-240000 cri-dockerd[1277]: time="2024-03-28T01:32:08Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0328 01:33:28.520598    6044 command_runner.go:130] > Mar 28 01:32:08 multinode-240000 cri-dockerd[1277]: time="2024-03-28T01:32:08Z" level=info msg="Start docker client with request timeout 0s"
	I0328 01:33:28.520598    6044 command_runner.go:130] > Mar 28 01:32:08 multinode-240000 cri-dockerd[1277]: time="2024-03-28T01:32:08Z" level=info msg="Hairpin mode is set to hairpin-veth"
	I0328 01:33:28.520598    6044 command_runner.go:130] > Mar 28 01:32:08 multinode-240000 cri-dockerd[1277]: time="2024-03-28T01:32:08Z" level=info msg="Loaded network plugin cni"
	I0328 01:33:28.520654    6044 command_runner.go:130] > Mar 28 01:32:08 multinode-240000 cri-dockerd[1277]: time="2024-03-28T01:32:08Z" level=info msg="Docker cri networking managed by network plugin cni"
	I0328 01:33:28.520782    6044 command_runner.go:130] > Mar 28 01:32:08 multinode-240000 cri-dockerd[1277]: time="2024-03-28T01:32:08Z" level=info msg="Docker Info: &{ID:c06283fc-1f43-4b26-80be-81922335c5fe Containers:18 ContainersRunning:0 ContainersPaused:0 ContainersStopped:18 Images:10 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:[] Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:[] Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6tables:true Debug:false NFd:27 OomKillDisable:false NGoroutines:49 SystemTime:2024-03-28T01:32:08.776685604Z LoggingDriver:json-file CgroupDriver:cgroupfs CgroupVersion:2 NEventsListener:0 KernelVersion:5.10.207 OperatingSystem:Buildroot 2023.02.9 OSVersion:2023.02.9 OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:0xc0002cf3b0 NCPU:2 MemTotal:2216206336 GenericResources:[] DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:multinode-240000 Labels:[provider=hyperv] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:map[io.containerd.runc.v2:{Path:runc Args:[] Shim:<nil>} runc:{Path:runc Args:[] Shim:<nil>}] DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:[] Nodes:0 Managers:0 Cluster:<nil> Warnings:[]} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dcf2847247e18caba8dce86522029642f60fe96b Expected:dcf2847247e18caba8dce86522029642f60fe96b} RuncCommit:{ID:51d5e94601ceffbbd85688df1c928ecccbfa4685 Expected:51d5e94601ceffbbd85688df1c928ecccbfa4685} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=builtin name=cgroupns] ProductLicense:Community Engine DefaultAddressPools:[] Warnings:[]}"
	I0328 01:33:28.520825    6044 command_runner.go:130] > Mar 28 01:32:08 multinode-240000 cri-dockerd[1277]: time="2024-03-28T01:32:08Z" level=info msg="Setting cgroupDriver cgroupfs"
	I0328 01:33:28.520825    6044 command_runner.go:130] > Mar 28 01:32:08 multinode-240000 cri-dockerd[1277]: time="2024-03-28T01:32:08Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	I0328 01:33:28.520884    6044 command_runner.go:130] > Mar 28 01:32:08 multinode-240000 cri-dockerd[1277]: time="2024-03-28T01:32:08Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	I0328 01:33:28.520908    6044 command_runner.go:130] > Mar 28 01:32:08 multinode-240000 cri-dockerd[1277]: time="2024-03-28T01:32:08Z" level=info msg="Start cri-dockerd grpc backend"
	I0328 01:33:28.520908    6044 command_runner.go:130] > Mar 28 01:32:08 multinode-240000 systemd[1]: Started CRI Interface for Docker Application Container Engine.
	I0328 01:33:28.521023    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 cri-dockerd[1277]: time="2024-03-28T01:32:13Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"busybox-7fdf7869d9-ct428_default\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"930fbfde452c0b2b3f13a6751fc648a70e87137f38175cb6dd161b40193b9a79\""
	I0328 01:33:28.521050    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 cri-dockerd[1277]: time="2024-03-28T01:32:13Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-76f75df574-776ph_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"6b6f67390b0701700963eec28e4c4cc4aa0e852e4ec0f2392f0f6f5d9bdad52a\""
	I0328 01:33:28.521050    6044 command_runner.go:130] > Mar 28 01:32:14 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:14.605075633Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0328 01:33:28.521050    6044 command_runner.go:130] > Mar 28 01:32:14 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:14.605218534Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0328 01:33:28.521050    6044 command_runner.go:130] > Mar 28 01:32:14 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:14.605234734Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0328 01:33:28.521050    6044 command_runner.go:130] > Mar 28 01:32:14 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:14.606038436Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0328 01:33:28.521050    6044 command_runner.go:130] > Mar 28 01:32:14 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:14.748289893Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0328 01:33:28.521050    6044 command_runner.go:130] > Mar 28 01:32:14 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:14.748491293Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0328 01:33:28.521050    6044 command_runner.go:130] > Mar 28 01:32:14 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:14.748521793Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0328 01:33:28.521050    6044 command_runner.go:130] > Mar 28 01:32:14 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:14.748642993Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0328 01:33:28.521050    6044 command_runner.go:130] > Mar 28 01:32:14 multinode-240000 cri-dockerd[1277]: time="2024-03-28T01:32:14Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/3314134e34d83c71815af773bff505973dcb9797421f75a59b98862dc8bc69bf/resolv.conf as [nameserver 172.28.224.1]"
	I0328 01:33:28.521050    6044 command_runner.go:130] > Mar 28 01:32:14 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:14.844158033Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0328 01:33:28.521050    6044 command_runner.go:130] > Mar 28 01:32:14 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:14.844387234Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0328 01:33:28.521050    6044 command_runner.go:130] > Mar 28 01:32:14 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:14.844509634Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0328 01:33:28.521050    6044 command_runner.go:130] > Mar 28 01:32:14 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:14.844924435Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0328 01:33:28.521050    6044 command_runner.go:130] > Mar 28 01:32:14 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:14.862145778Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0328 01:33:28.521050    6044 command_runner.go:130] > Mar 28 01:32:14 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:14.862239979Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0328 01:33:28.521050    6044 command_runner.go:130] > Mar 28 01:32:14 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:14.862251979Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0328 01:33:28.521570    6044 command_runner.go:130] > Mar 28 01:32:14 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:14.862457779Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0328 01:33:28.521623    6044 command_runner.go:130] > Mar 28 01:32:14 multinode-240000 cri-dockerd[1277]: time="2024-03-28T01:32:14Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/8cf9dbbfda9ea6f2b61a134374c1f92196fe22bde8e166de86c62d863a2fbdb9/resolv.conf as [nameserver 172.28.224.1]"
	I0328 01:33:28.521623    6044 command_runner.go:130] > Mar 28 01:32:15 multinode-240000 cri-dockerd[1277]: time="2024-03-28T01:32:15Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/8780a18ab975521e6b1b20e4b7cffe786927f03654dd858b9d179f1d73d13d81/resolv.conf as [nameserver 172.28.224.1]"
	I0328 01:33:28.521623    6044 command_runner.go:130] > Mar 28 01:32:15 multinode-240000 cri-dockerd[1277]: time="2024-03-28T01:32:15Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/4dd7c4652074475872599900ce854e48425a373dfa665073bd9bfb56fa5330c0/resolv.conf as [nameserver 172.28.224.1]"
	I0328 01:33:28.521623    6044 command_runner.go:130] > Mar 28 01:32:15 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:15.196398617Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0328 01:33:28.521719    6044 command_runner.go:130] > Mar 28 01:32:15 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:15.196541018Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0328 01:33:28.521719    6044 command_runner.go:130] > Mar 28 01:32:15 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:15.196606818Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0328 01:33:28.521719    6044 command_runner.go:130] > Mar 28 01:32:15 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:15.199212424Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0328 01:33:28.521797    6044 command_runner.go:130] > Mar 28 01:32:15 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:15.279595426Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0328 01:33:28.521797    6044 command_runner.go:130] > Mar 28 01:32:15 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:15.279693326Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0328 01:33:28.521797    6044 command_runner.go:130] > Mar 28 01:32:15 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:15.279767327Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0328 01:33:28.521880    6044 command_runner.go:130] > Mar 28 01:32:15 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:15.280052327Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0328 01:33:28.521909    6044 command_runner.go:130] > Mar 28 01:32:15 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:15.393428912Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0328 01:33:28.521909    6044 command_runner.go:130] > Mar 28 01:32:15 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:15.393536412Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0328 01:33:28.521909    6044 command_runner.go:130] > Mar 28 01:32:15 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:15.393553112Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0328 01:33:28.521909    6044 command_runner.go:130] > Mar 28 01:32:15 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:15.393951413Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0328 01:33:28.521909    6044 command_runner.go:130] > Mar 28 01:32:15 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:15.409559852Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0328 01:33:28.521909    6044 command_runner.go:130] > Mar 28 01:32:15 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:15.409616852Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0328 01:33:28.521909    6044 command_runner.go:130] > Mar 28 01:32:15 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:15.409628953Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0328 01:33:28.521909    6044 command_runner.go:130] > Mar 28 01:32:15 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:15.410047254Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0328 01:33:28.521909    6044 command_runner.go:130] > Mar 28 01:32:19 multinode-240000 cri-dockerd[1277]: time="2024-03-28T01:32:19Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
	I0328 01:33:28.521909    6044 command_runner.go:130] > Mar 28 01:32:20 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:20.444492990Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0328 01:33:28.521909    6044 command_runner.go:130] > Mar 28 01:32:20 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:20.445565592Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0328 01:33:28.521909    6044 command_runner.go:130] > Mar 28 01:32:20 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:20.461244632Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0328 01:33:28.521909    6044 command_runner.go:130] > Mar 28 01:32:20 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:20.465433642Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0328 01:33:28.521909    6044 command_runner.go:130] > Mar 28 01:32:20 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:20.501034531Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0328 01:33:28.521909    6044 command_runner.go:130] > Mar 28 01:32:20 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:20.501100632Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0328 01:33:28.521909    6044 command_runner.go:130] > Mar 28 01:32:20 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:20.501129332Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0328 01:33:28.521909    6044 command_runner.go:130] > Mar 28 01:32:20 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:20.501289432Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0328 01:33:28.521909    6044 command_runner.go:130] > Mar 28 01:32:20 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:20.552329460Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0328 01:33:28.521909    6044 command_runner.go:130] > Mar 28 01:32:20 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:20.552525461Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0328 01:33:28.521909    6044 command_runner.go:130] > Mar 28 01:32:20 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:20.552550661Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0328 01:33:28.521909    6044 command_runner.go:130] > Mar 28 01:32:20 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:20.553090962Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0328 01:33:28.521909    6044 command_runner.go:130] > Mar 28 01:32:20 multinode-240000 cri-dockerd[1277]: time="2024-03-28T01:32:20Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/dfd01cb54b7d89aef97b057d7578bb34d4f58b0e2c9aacddeeff9fbb19db3cb6/resolv.conf as [nameserver 172.28.224.1]"
	I0328 01:33:28.521909    6044 command_runner.go:130] > Mar 28 01:32:20 multinode-240000 cri-dockerd[1277]: time="2024-03-28T01:32:20Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/821d3cf9ae1a9ffce2f350e9ee239e00fd8743eb338fae8a5b39734fc9cabf5e/resolv.conf as [nameserver 172.28.224.1]"
	I0328 01:33:28.521909    6044 command_runner.go:130] > Mar 28 01:32:21 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:21.129523609Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0328 01:33:28.522445    6044 command_runner.go:130] > Mar 28 01:32:21 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:21.129601909Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0328 01:33:28.522445    6044 command_runner.go:130] > Mar 28 01:32:21 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:21.129619209Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0328 01:33:28.522564    6044 command_runner.go:130] > Mar 28 01:32:21 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:21.129777210Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0328 01:33:28.522564    6044 command_runner.go:130] > Mar 28 01:32:21 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:21.142530242Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0328 01:33:28.522564    6044 command_runner.go:130] > Mar 28 01:32:21 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:21.142656442Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0328 01:33:28.522564    6044 command_runner.go:130] > Mar 28 01:32:21 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:21.142692242Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0328 01:33:28.522564    6044 command_runner.go:130] > Mar 28 01:32:21 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:21.143468544Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0328 01:33:28.522564    6044 command_runner.go:130] > Mar 28 01:32:21 multinode-240000 cri-dockerd[1277]: time="2024-03-28T01:32:21Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/347f7ad7ebaed8796c8b12cf936e661c605c1c7a9dc02ccb15b4c682a96c1058/resolv.conf as [nameserver 172.28.224.1]"
	I0328 01:33:28.522564    6044 command_runner.go:130] > Mar 28 01:32:21 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:21.510503865Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0328 01:33:28.522564    6044 command_runner.go:130] > Mar 28 01:32:21 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:21.512149169Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0328 01:33:28.522564    6044 command_runner.go:130] > Mar 28 01:32:21 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:21.515162977Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0328 01:33:28.522564    6044 command_runner.go:130] > Mar 28 01:32:21 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:21.515941979Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0328 01:33:28.522564    6044 command_runner.go:130] > Mar 28 01:32:51 multinode-240000 dockerd[1051]: time="2024-03-28T01:32:51.802252517Z" level=info msg="ignoring event" container=4dcf03394ea80c2d0427ea263be7c533ab3aaf86ba89e254db95cbdd1b70a343 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0328 01:33:28.522564    6044 command_runner.go:130] > Mar 28 01:32:51 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:51.804266497Z" level=info msg="shim disconnected" id=4dcf03394ea80c2d0427ea263be7c533ab3aaf86ba89e254db95cbdd1b70a343 namespace=moby
	I0328 01:33:28.522564    6044 command_runner.go:130] > Mar 28 01:32:51 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:51.805357585Z" level=warning msg="cleaning up after shim disconnected" id=4dcf03394ea80c2d0427ea263be7c533ab3aaf86ba89e254db95cbdd1b70a343 namespace=moby
	I0328 01:33:28.522564    6044 command_runner.go:130] > Mar 28 01:32:51 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:51.805496484Z" level=info msg="cleaning up dead shim" namespace=moby
	I0328 01:33:28.522564    6044 command_runner.go:130] > Mar 28 01:33:05 multinode-240000 dockerd[1057]: time="2024-03-28T01:33:05.040212718Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0328 01:33:28.522564    6044 command_runner.go:130] > Mar 28 01:33:05 multinode-240000 dockerd[1057]: time="2024-03-28T01:33:05.040328718Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0328 01:33:28.522564    6044 command_runner.go:130] > Mar 28 01:33:05 multinode-240000 dockerd[1057]: time="2024-03-28T01:33:05.041880913Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0328 01:33:28.522564    6044 command_runner.go:130] > Mar 28 01:33:05 multinode-240000 dockerd[1057]: time="2024-03-28T01:33:05.044028408Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0328 01:33:28.522564    6044 command_runner.go:130] > Mar 28 01:33:24 multinode-240000 dockerd[1057]: time="2024-03-28T01:33:24.067078014Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0328 01:33:28.522564    6044 command_runner.go:130] > Mar 28 01:33:24 multinode-240000 dockerd[1057]: time="2024-03-28T01:33:24.067134214Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0328 01:33:28.522564    6044 command_runner.go:130] > Mar 28 01:33:24 multinode-240000 dockerd[1057]: time="2024-03-28T01:33:24.067145514Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0328 01:33:28.523091    6044 command_runner.go:130] > Mar 28 01:33:24 multinode-240000 dockerd[1057]: time="2024-03-28T01:33:24.067230414Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0328 01:33:28.523091    6044 command_runner.go:130] > Mar 28 01:33:24 multinode-240000 dockerd[1057]: time="2024-03-28T01:33:24.074234221Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0328 01:33:28.523091    6044 command_runner.go:130] > Mar 28 01:33:24 multinode-240000 dockerd[1057]: time="2024-03-28T01:33:24.074356921Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0328 01:33:28.523091    6044 command_runner.go:130] > Mar 28 01:33:24 multinode-240000 dockerd[1057]: time="2024-03-28T01:33:24.074428021Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0328 01:33:28.523207    6044 command_runner.go:130] > Mar 28 01:33:24 multinode-240000 dockerd[1057]: time="2024-03-28T01:33:24.074678322Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0328 01:33:28.523304    6044 command_runner.go:130] > Mar 28 01:33:24 multinode-240000 cri-dockerd[1277]: time="2024-03-28T01:33:24Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/d3a9caca4652153f4a871cbd85e3780df506a9ae46da758a86025933fbaed683/resolv.conf as [nameserver 172.28.224.1]"
	I0328 01:33:28.523418    6044 command_runner.go:130] > Mar 28 01:33:24 multinode-240000 cri-dockerd[1277]: time="2024-03-28T01:33:24Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/57a41fbc578d50e83f1d23eab9fdc7d77f76594eb2d17300827b52b00008af13/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	I0328 01:33:28.523484    6044 command_runner.go:130] > Mar 28 01:33:24 multinode-240000 dockerd[1057]: time="2024-03-28T01:33:24.642121747Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0328 01:33:28.523484    6044 command_runner.go:130] > Mar 28 01:33:24 multinode-240000 dockerd[1057]: time="2024-03-28T01:33:24.644702250Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0328 01:33:28.523484    6044 command_runner.go:130] > Mar 28 01:33:24 multinode-240000 dockerd[1057]: time="2024-03-28T01:33:24.644921750Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0328 01:33:28.523484    6044 command_runner.go:130] > Mar 28 01:33:24 multinode-240000 dockerd[1057]: time="2024-03-28T01:33:24.645074450Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0328 01:33:28.523484    6044 command_runner.go:130] > Mar 28 01:33:24 multinode-240000 dockerd[1057]: time="2024-03-28T01:33:24.675693486Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0328 01:33:28.523484    6044 command_runner.go:130] > Mar 28 01:33:24 multinode-240000 dockerd[1057]: time="2024-03-28T01:33:24.675868286Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0328 01:33:28.523484    6044 command_runner.go:130] > Mar 28 01:33:24 multinode-240000 dockerd[1057]: time="2024-03-28T01:33:24.675939787Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0328 01:33:28.523484    6044 command_runner.go:130] > Mar 28 01:33:24 multinode-240000 dockerd[1057]: time="2024-03-28T01:33:24.676054087Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0328 01:33:28.523484    6044 command_runner.go:130] > Mar 28 01:33:27 multinode-240000 dockerd[1051]: 2024/03/28 01:33:27 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0328 01:33:28.523484    6044 command_runner.go:130] > Mar 28 01:33:27 multinode-240000 dockerd[1051]: 2024/03/28 01:33:27 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0328 01:33:28.523484    6044 command_runner.go:130] > Mar 28 01:33:27 multinode-240000 dockerd[1051]: 2024/03/28 01:33:27 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0328 01:33:28.523484    6044 command_runner.go:130] > Mar 28 01:33:27 multinode-240000 dockerd[1051]: 2024/03/28 01:33:27 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0328 01:33:28.523484    6044 command_runner.go:130] > Mar 28 01:33:27 multinode-240000 dockerd[1051]: 2024/03/28 01:33:27 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0328 01:33:28.523484    6044 command_runner.go:130] > Mar 28 01:33:28 multinode-240000 dockerd[1051]: 2024/03/28 01:33:28 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0328 01:33:28.523484    6044 command_runner.go:130] > Mar 28 01:33:28 multinode-240000 dockerd[1051]: 2024/03/28 01:33:28 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0328 01:33:28.523484    6044 command_runner.go:130] > Mar 28 01:33:28 multinode-240000 dockerd[1051]: 2024/03/28 01:33:28 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0328 01:33:28.523484    6044 command_runner.go:130] > Mar 28 01:33:28 multinode-240000 dockerd[1051]: 2024/03/28 01:33:28 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0328 01:33:28.523484    6044 command_runner.go:130] > Mar 28 01:33:28 multinode-240000 dockerd[1051]: 2024/03/28 01:33:28 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0328 01:33:28.523484    6044 command_runner.go:130] > Mar 28 01:33:28 multinode-240000 dockerd[1051]: 2024/03/28 01:33:28 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0328 01:33:28.523484    6044 command_runner.go:130] > Mar 28 01:33:28 multinode-240000 dockerd[1051]: 2024/03/28 01:33:28 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0328 01:33:31.076098    6044 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:33:31.104780    6044 command_runner.go:130] > 2032
	I0328 01:33:31.104860    6044 api_server.go:72] duration metric: took 1m6.101039s to wait for apiserver process to appear ...
	I0328 01:33:31.104924    6044 api_server.go:88] waiting for apiserver healthz status ...
	I0328 01:33:31.116927    6044 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0328 01:33:31.147305    6044 command_runner.go:130] > 6539c85e1b61
	I0328 01:33:31.147829    6044 logs.go:276] 1 containers: [6539c85e1b61]
	I0328 01:33:31.158823    6044 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0328 01:33:31.192779    6044 command_runner.go:130] > ab4a76ecb029
	I0328 01:33:31.192779    6044 logs.go:276] 1 containers: [ab4a76ecb029]
	I0328 01:33:31.201778    6044 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0328 01:33:31.228900    6044 command_runner.go:130] > e6a5a75ec447
	I0328 01:33:31.228900    6044 command_runner.go:130] > 29e516c918ef
	I0328 01:33:31.229950    6044 logs.go:276] 2 containers: [e6a5a75ec447 29e516c918ef]
	I0328 01:33:31.239808    6044 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0328 01:33:31.275805    6044 command_runner.go:130] > bc83a37dbd03
	I0328 01:33:31.275805    6044 command_runner.go:130] > 7061eab02790
	I0328 01:33:31.275904    6044 logs.go:276] 2 containers: [bc83a37dbd03 7061eab02790]
	I0328 01:33:31.285038    6044 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0328 01:33:31.312354    6044 command_runner.go:130] > 7c9638784c60
	I0328 01:33:31.312354    6044 command_runner.go:130] > bb0b3c542264
	I0328 01:33:31.312456    6044 logs.go:276] 2 containers: [7c9638784c60 bb0b3c542264]
	I0328 01:33:31.322705    6044 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0328 01:33:31.349305    6044 command_runner.go:130] > ceaccf323dee
	I0328 01:33:31.349305    6044 command_runner.go:130] > 1aa05268773e
	I0328 01:33:31.349305    6044 logs.go:276] 2 containers: [ceaccf323dee 1aa05268773e]
	I0328 01:33:31.358926    6044 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0328 01:33:31.386018    6044 command_runner.go:130] > ee99098e42fc
	I0328 01:33:31.386081    6044 command_runner.go:130] > dc9808261b21
	I0328 01:33:31.386081    6044 logs.go:276] 2 containers: [ee99098e42fc dc9808261b21]
	I0328 01:33:31.386143    6044 logs.go:123] Gathering logs for kube-proxy [bb0b3c542264] ...
	I0328 01:33:31.386143    6044 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb0b3c542264"
	I0328 01:33:31.416544    6044 command_runner.go:130] ! I0328 01:07:46.260052       1 server_others.go:72] "Using iptables proxy"
	I0328 01:33:31.416888    6044 command_runner.go:130] ! I0328 01:07:46.279785       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["172.28.227.122"]
	I0328 01:33:31.416957    6044 command_runner.go:130] ! I0328 01:07:46.364307       1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
	I0328 01:33:31.416957    6044 command_runner.go:130] ! I0328 01:07:46.364414       1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0328 01:33:31.416957    6044 command_runner.go:130] ! I0328 01:07:46.364433       1 server_others.go:168] "Using iptables Proxier"
	I0328 01:33:31.417018    6044 command_runner.go:130] ! I0328 01:07:46.368524       1 proxier.go:245] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0328 01:33:31.417044    6044 command_runner.go:130] ! I0328 01:07:46.368854       1 server.go:865] "Version info" version="v1.29.3"
	I0328 01:33:31.417044    6044 command_runner.go:130] ! I0328 01:07:46.368909       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0328 01:33:31.417044    6044 command_runner.go:130] ! I0328 01:07:46.370904       1 config.go:188] "Starting service config controller"
	I0328 01:33:31.417044    6044 command_runner.go:130] ! I0328 01:07:46.382389       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0328 01:33:31.417116    6044 command_runner.go:130] ! I0328 01:07:46.382488       1 shared_informer.go:318] Caches are synced for service config
	I0328 01:33:31.417116    6044 command_runner.go:130] ! I0328 01:07:46.371910       1 config.go:97] "Starting endpoint slice config controller"
	I0328 01:33:31.417116    6044 command_runner.go:130] ! I0328 01:07:46.382665       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0328 01:33:31.417116    6044 command_runner.go:130] ! I0328 01:07:46.382693       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0328 01:33:31.417176    6044 command_runner.go:130] ! I0328 01:07:46.374155       1 config.go:315] "Starting node config controller"
	I0328 01:33:31.417176    6044 command_runner.go:130] ! I0328 01:07:46.382861       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0328 01:33:31.417176    6044 command_runner.go:130] ! I0328 01:07:46.382887       1 shared_informer.go:318] Caches are synced for node config
	I0328 01:33:31.419910    6044 logs.go:123] Gathering logs for kindnet [ee99098e42fc] ...
	I0328 01:33:31.419910    6044 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee99098e42fc"
	I0328 01:33:31.448577    6044 command_runner.go:130] ! I0328 01:32:22.319753       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0328 01:33:31.448577    6044 command_runner.go:130] ! I0328 01:32:22.320254       1 main.go:107] hostIP = 172.28.229.19
	I0328 01:33:31.448577    6044 command_runner.go:130] ! podIP = 172.28.229.19
	I0328 01:33:31.448577    6044 command_runner.go:130] ! I0328 01:32:22.321740       1 main.go:116] setting mtu 1500 for CNI 
	I0328 01:33:31.449550    6044 command_runner.go:130] ! I0328 01:32:22.321777       1 main.go:146] kindnetd IP family: "ipv4"
	I0328 01:33:31.449550    6044 command_runner.go:130] ! I0328 01:32:22.321799       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0328 01:33:31.449550    6044 command_runner.go:130] ! I0328 01:32:52.738929       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: i/o timeout
	I0328 01:33:31.449550    6044 command_runner.go:130] ! I0328 01:32:52.794200       1 main.go:223] Handling node with IPs: map[172.28.229.19:{}]
	I0328 01:33:31.449550    6044 command_runner.go:130] ! I0328 01:32:52.794320       1 main.go:227] handling current node
	I0328 01:33:31.449550    6044 command_runner.go:130] ! I0328 01:32:52.794662       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:31.449550    6044 command_runner.go:130] ! I0328 01:32:52.794805       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:31.449550    6044 command_runner.go:130] ! I0328 01:32:52.794957       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 172.28.230.250 Flags: [] Table: 0} 
	I0328 01:33:31.449550    6044 command_runner.go:130] ! I0328 01:32:52.795458       1 main.go:223] Handling node with IPs: map[172.28.224.172:{}]
	I0328 01:33:31.449550    6044 command_runner.go:130] ! I0328 01:32:52.795540       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.3.0/24] 
	I0328 01:33:31.449550    6044 command_runner.go:130] ! I0328 01:32:52.795606       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.3.0/24 Src: <nil> Gw: 172.28.224.172 Flags: [] Table: 0} 
	I0328 01:33:31.449550    6044 command_runner.go:130] ! I0328 01:33:02.803479       1 main.go:223] Handling node with IPs: map[172.28.229.19:{}]
	I0328 01:33:31.449550    6044 command_runner.go:130] ! I0328 01:33:02.803569       1 main.go:227] handling current node
	I0328 01:33:31.449550    6044 command_runner.go:130] ! I0328 01:33:02.803584       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:31.449550    6044 command_runner.go:130] ! I0328 01:33:02.803592       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:31.449550    6044 command_runner.go:130] ! I0328 01:33:02.803771       1 main.go:223] Handling node with IPs: map[172.28.224.172:{}]
	I0328 01:33:31.449550    6044 command_runner.go:130] ! I0328 01:33:02.803938       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.3.0/24] 
	I0328 01:33:31.449550    6044 command_runner.go:130] ! I0328 01:33:12.813148       1 main.go:223] Handling node with IPs: map[172.28.229.19:{}]
	I0328 01:33:31.449550    6044 command_runner.go:130] ! I0328 01:33:12.813258       1 main.go:227] handling current node
	I0328 01:33:31.449550    6044 command_runner.go:130] ! I0328 01:33:12.813273       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:31.449550    6044 command_runner.go:130] ! I0328 01:33:12.813281       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:31.449550    6044 command_runner.go:130] ! I0328 01:33:12.813393       1 main.go:223] Handling node with IPs: map[172.28.224.172:{}]
	I0328 01:33:31.449550    6044 command_runner.go:130] ! I0328 01:33:12.813441       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.3.0/24] 
	I0328 01:33:31.449550    6044 command_runner.go:130] ! I0328 01:33:22.829358       1 main.go:223] Handling node with IPs: map[172.28.229.19:{}]
	I0328 01:33:31.449550    6044 command_runner.go:130] ! I0328 01:33:22.829449       1 main.go:227] handling current node
	I0328 01:33:31.449550    6044 command_runner.go:130] ! I0328 01:33:22.829466       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:31.449550    6044 command_runner.go:130] ! I0328 01:33:22.829475       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:31.449550    6044 command_runner.go:130] ! I0328 01:33:22.829915       1 main.go:223] Handling node with IPs: map[172.28.224.172:{}]
	I0328 01:33:31.449550    6044 command_runner.go:130] ! I0328 01:33:22.829982       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.3.0/24] 
	I0328 01:33:31.452546    6044 logs.go:123] Gathering logs for kubelet ...
	I0328 01:33:31.452546    6044 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:33:31.485860    6044 command_runner.go:130] > Mar 28 01:32:09 multinode-240000 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0328 01:33:31.485860    6044 command_runner.go:130] > Mar 28 01:32:10 multinode-240000 kubelet[1398]: I0328 01:32:10.127138    1398 server.go:487] "Kubelet version" kubeletVersion="v1.29.3"
	I0328 01:33:31.485860    6044 command_runner.go:130] > Mar 28 01:32:10 multinode-240000 kubelet[1398]: I0328 01:32:10.127495    1398 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0328 01:33:31.485860    6044 command_runner.go:130] > Mar 28 01:32:10 multinode-240000 kubelet[1398]: I0328 01:32:10.127845    1398 server.go:919] "Client rotation is on, will bootstrap in background"
	I0328 01:33:31.485860    6044 command_runner.go:130] > Mar 28 01:32:10 multinode-240000 kubelet[1398]: E0328 01:32:10.128279    1398 run.go:74] "command failed" err="failed to run Kubelet: unable to load bootstrap kubeconfig: stat /etc/kubernetes/bootstrap-kubelet.conf: no such file or directory"
	I0328 01:33:31.485860    6044 command_runner.go:130] > Mar 28 01:32:10 multinode-240000 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	I0328 01:33:31.486714    6044 command_runner.go:130] > Mar 28 01:32:10 multinode-240000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	I0328 01:33:31.486714    6044 command_runner.go:130] > Mar 28 01:32:10 multinode-240000 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
	I0328 01:33:31.486714    6044 command_runner.go:130] > Mar 28 01:32:10 multinode-240000 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	I0328 01:33:31.486714    6044 command_runner.go:130] > Mar 28 01:32:10 multinode-240000 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0328 01:33:31.486714    6044 command_runner.go:130] > Mar 28 01:32:10 multinode-240000 kubelet[1450]: I0328 01:32:10.911342    1450 server.go:487] "Kubelet version" kubeletVersion="v1.29.3"
	I0328 01:33:31.486714    6044 command_runner.go:130] > Mar 28 01:32:10 multinode-240000 kubelet[1450]: I0328 01:32:10.911442    1450 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0328 01:33:31.486714    6044 command_runner.go:130] > Mar 28 01:32:10 multinode-240000 kubelet[1450]: I0328 01:32:10.911822    1450 server.go:919] "Client rotation is on, will bootstrap in background"
	I0328 01:33:31.486714    6044 command_runner.go:130] > Mar 28 01:32:10 multinode-240000 kubelet[1450]: E0328 01:32:10.911883    1450 run.go:74] "command failed" err="failed to run Kubelet: unable to load bootstrap kubeconfig: stat /etc/kubernetes/bootstrap-kubelet.conf: no such file or directory"
	I0328 01:33:31.486714    6044 command_runner.go:130] > Mar 28 01:32:10 multinode-240000 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	I0328 01:33:31.486714    6044 command_runner.go:130] > Mar 28 01:32:10 multinode-240000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	I0328 01:33:31.486714    6044 command_runner.go:130] > Mar 28 01:32:11 multinode-240000 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	I0328 01:33:31.486714    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0328 01:33:31.486714    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.568166    1533 server.go:487] "Kubelet version" kubeletVersion="v1.29.3"
	I0328 01:33:31.486714    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.568590    1533 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0328 01:33:31.486714    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.568985    1533 server.go:919] "Client rotation is on, will bootstrap in background"
	I0328 01:33:31.486714    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.572343    1533 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	I0328 01:33:31.486714    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.590932    1533 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0328 01:33:31.486714    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.648763    1533 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /"
	I0328 01:33:31.486714    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.650098    1533 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
	I0328 01:33:31.486714    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.650393    1533 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
	I0328 01:33:31.486714    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.650479    1533 topology_manager.go:138] "Creating topology manager with none policy"
	I0328 01:33:31.486714    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.650495    1533 container_manager_linux.go:301] "Creating device plugin manager"
	I0328 01:33:31.486714    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.652420    1533 state_mem.go:36] "Initialized new in-memory state store"
	I0328 01:33:31.486714    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.654064    1533 kubelet.go:396] "Attempting to sync node with API server"
	I0328 01:33:31.486714    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.654388    1533 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
	I0328 01:33:31.486714    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.654468    1533 kubelet.go:312] "Adding apiserver pod source"
	I0328 01:33:31.486714    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.655057    1533 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
	I0328 01:33:31.486714    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: W0328 01:32:13.659987    1533 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dmultinode-240000&limit=500&resourceVersion=0": dial tcp 172.28.229.19:8443: connect: connection refused
	I0328 01:33:31.486714    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: E0328 01:32:13.660087    1533 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dmultinode-240000&limit=500&resourceVersion=0": dial tcp 172.28.229.19:8443: connect: connection refused
	I0328 01:33:31.486714    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: W0328 01:32:13.669074    1533 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.28.229.19:8443: connect: connection refused
	I0328 01:33:31.486714    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: E0328 01:32:13.669300    1533 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.28.229.19:8443: connect: connection refused
	I0328 01:33:31.486714    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.674896    1533 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="docker" version="26.0.0" apiVersion="v1"
	I0328 01:33:31.486714    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.676909    1533 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
	I0328 01:33:31.486714    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: W0328 01:32:13.677427    1533 probe.go:268] Flexvolume plugin directory at /usr/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
	I0328 01:33:31.486714    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.678180    1533 server.go:1256] "Started kubelet"
	I0328 01:33:31.486714    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.680600    1533 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
	I0328 01:33:31.486714    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.682066    1533 server.go:461] "Adding debug handlers to kubelet server"
	I0328 01:33:31.486714    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.683585    1533 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
	I0328 01:33:31.486714    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.684672    1533 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
	I0328 01:33:31.486714    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: E0328 01:32:13.686372    1533 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events\": dial tcp 172.28.229.19:8443: connect: connection refused" event="&Event{ObjectMeta:{multinode-240000.17c0c99ccc29b81f  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:multinode-240000,UID:multinode-240000,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:multinode-240000,},FirstTimestamp:2024-03-28 01:32:13.678155807 +0000 UTC m=+0.237165597,LastTimestamp:2024-03-28 01:32:13.678155807 +0000 UTC m=+0.237165597,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:multinode-240000,}"
	I0328 01:33:31.486714    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.690229    1533 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
	I0328 01:33:31.486714    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.708889    1533 volume_manager.go:291] "Starting Kubelet Volume Manager"
	I0328 01:33:31.486714    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.712930    1533 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
	I0328 01:33:31.487735    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: E0328 01:32:13.730166    1533 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-240000?timeout=10s\": dial tcp 172.28.229.19:8443: connect: connection refused" interval="200ms"
	I0328 01:33:31.487735    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: W0328 01:32:13.730938    1533 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.28.229.19:8443: connect: connection refused
	I0328 01:33:31.487735    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: E0328 01:32:13.731114    1533 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.28.229.19:8443: connect: connection refused
	I0328 01:33:31.487735    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.739149    1533 reconciler_new.go:29] "Reconciler: start to sync state"
	I0328 01:33:31.487735    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.749138    1533 factory.go:221] Registration of the systemd container factory successfully
	I0328 01:33:31.487735    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.749449    1533 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
	I0328 01:33:31.487735    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.750189    1533 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
	I0328 01:33:31.487735    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.776861    1533 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
	I0328 01:33:31.487735    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.786285    1533 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
	I0328 01:33:31.487735    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.788142    1533 status_manager.go:217] "Starting to sync pod status with apiserver"
	I0328 01:33:31.487735    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.788369    1533 kubelet.go:2329] "Starting kubelet main sync loop"
	I0328 01:33:31.487735    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: E0328 01:32:13.788778    1533 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
	I0328 01:33:31.487735    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: W0328 01:32:13.796114    1533 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.28.229.19:8443: connect: connection refused
	I0328 01:33:31.487735    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: E0328 01:32:13.796211    1533 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.28.229.19:8443: connect: connection refused
	I0328 01:33:31.487735    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.819127    1533 cpu_manager.go:214] "Starting CPU manager" policy="none"
	I0328 01:33:31.487735    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.819290    1533 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
	I0328 01:33:31.487735    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.819423    1533 state_mem.go:36] "Initialized new in-memory state store"
	I0328 01:33:31.487735    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: E0328 01:32:13.820373    1533 iptables.go:575] "Could not set up iptables canary" err=<
	I0328 01:33:31.487735    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	I0328 01:33:31.487735    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	I0328 01:33:31.487735    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]:         Perhaps ip6tables or your kernel needs to be upgraded.
	I0328 01:33:31.487735    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	I0328 01:33:31.487735    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.823600    1533 state_mem.go:88] "Updated default CPUSet" cpuSet=""
	I0328 01:33:31.487735    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.823686    1533 state_mem.go:96] "Updated CPUSet assignments" assignments={}
	I0328 01:33:31.487735    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.823700    1533 policy_none.go:49] "None policy: Start"
	I0328 01:33:31.487735    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.830073    1533 kubelet_node_status.go:73] "Attempting to register node" node="multinode-240000"
	I0328 01:33:31.487735    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: E0328 01:32:13.831657    1533 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.28.229.19:8443: connect: connection refused" node="multinode-240000"
	I0328 01:33:31.487735    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.843841    1533 memory_manager.go:170] "Starting memorymanager" policy="None"
	I0328 01:33:31.487735    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.843966    1533 state_mem.go:35] "Initializing new in-memory state store"
	I0328 01:33:31.487735    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.844749    1533 state_mem.go:75] "Updated machine memory state"
	I0328 01:33:31.487735    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.847245    1533 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
	I0328 01:33:31.487735    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.848649    1533 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
	I0328 01:33:31.487735    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.890150    1533 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="930fbfde452c0b2b3f13a6751fc648a70e87137f38175cb6dd161b40193b9a79"
	I0328 01:33:31.487735    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.890206    1533 topology_manager.go:215] "Topology Admit Handler" podUID="ada1864a97137760b3789cc738948aa2" podNamespace="kube-system" podName="kube-apiserver-multinode-240000"
	I0328 01:33:31.487735    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.908127    1533 topology_manager.go:215] "Topology Admit Handler" podUID="092744cdc60a216294790b52c372bdaa" podNamespace="kube-system" podName="kube-controller-manager-multinode-240000"
	I0328 01:33:31.487735    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: E0328 01:32:13.916258    1533 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"multinode-240000\" not found"
	I0328 01:33:31.487735    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.922354    1533 topology_manager.go:215] "Topology Admit Handler" podUID="f5f9b00a2a0d8b16290abf555def0fb3" podNamespace="kube-system" podName="kube-scheduler-multinode-240000"
	I0328 01:33:31.487735    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: E0328 01:32:13.932448    1533 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-240000?timeout=10s\": dial tcp 172.28.229.19:8443: connect: connection refused" interval="400ms"
	I0328 01:33:31.487735    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.941331    1533 topology_manager.go:215] "Topology Admit Handler" podUID="9f48c65a58defdbb87996760bf93b230" podNamespace="kube-system" podName="etcd-multinode-240000"
	I0328 01:33:31.487735    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.953609    1533 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6b6f67390b0701700963eec28e4c4cc4aa0e852e4ec0f2392f0f6f5d9bdad52a"
	I0328 01:33:31.487735    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.953654    1533 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="763932cfdf0b0ce7a2df0bd78fe540ad8e5811cd74af29eee46932fb651a4df3"
	I0328 01:33:31.487735    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.953669    1533 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6ae82cd0a848978d4fcc6941c33dd7fd18404e11e40d6b5d9f46484a6af7ec7d"
	I0328 01:33:31.487735    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.966780    1533 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/092744cdc60a216294790b52c372bdaa-usr-share-ca-certificates\") pod \"kube-controller-manager-multinode-240000\" (UID: \"092744cdc60a216294790b52c372bdaa\") " pod="kube-system/kube-controller-manager-multinode-240000"
	I0328 01:33:31.487735    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.966955    1533 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ada1864a97137760b3789cc738948aa2-ca-certs\") pod \"kube-apiserver-multinode-240000\" (UID: \"ada1864a97137760b3789cc738948aa2\") " pod="kube-system/kube-apiserver-multinode-240000"
	I0328 01:33:31.488735    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.967022    1533 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ada1864a97137760b3789cc738948aa2-k8s-certs\") pod \"kube-apiserver-multinode-240000\" (UID: \"ada1864a97137760b3789cc738948aa2\") " pod="kube-system/kube-apiserver-multinode-240000"
	I0328 01:33:31.488735    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.967064    1533 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ada1864a97137760b3789cc738948aa2-usr-share-ca-certificates\") pod \"kube-apiserver-multinode-240000\" (UID: \"ada1864a97137760b3789cc738948aa2\") " pod="kube-system/kube-apiserver-multinode-240000"
	I0328 01:33:31.488735    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.967128    1533 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/092744cdc60a216294790b52c372bdaa-ca-certs\") pod \"kube-controller-manager-multinode-240000\" (UID: \"092744cdc60a216294790b52c372bdaa\") " pod="kube-system/kube-controller-manager-multinode-240000"
	I0328 01:33:31.488735    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.967158    1533 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/092744cdc60a216294790b52c372bdaa-flexvolume-dir\") pod \"kube-controller-manager-multinode-240000\" (UID: \"092744cdc60a216294790b52c372bdaa\") " pod="kube-system/kube-controller-manager-multinode-240000"
	I0328 01:33:31.488735    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.967238    1533 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/092744cdc60a216294790b52c372bdaa-k8s-certs\") pod \"kube-controller-manager-multinode-240000\" (UID: \"092744cdc60a216294790b52c372bdaa\") " pod="kube-system/kube-controller-manager-multinode-240000"
	I0328 01:33:31.488735    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.967310    1533 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/092744cdc60a216294790b52c372bdaa-kubeconfig\") pod \"kube-controller-manager-multinode-240000\" (UID: \"092744cdc60a216294790b52c372bdaa\") " pod="kube-system/kube-controller-manager-multinode-240000"
	I0328 01:33:31.488735    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.969606    1533 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="28426f4e9df5e7247fb25f1d5d48b9917e6d95d1f58292026ed0fde424835379"
	I0328 01:33:31.488735    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.985622    1533 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5d9ed3a20e88558fec102c7c331c667347b65f4c3d7d91740e135d71d8c45e6d"
	I0328 01:33:31.488735    6044 command_runner.go:130] > Mar 28 01:32:14 multinode-240000 kubelet[1533]: I0328 01:32:14.000616    1533 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7415d077c6f8104e5bc256b9c398a1cd3b34b68ae6ab02765cf3a8a5090c4b88"
	I0328 01:33:31.488735    6044 command_runner.go:130] > Mar 28 01:32:14 multinode-240000 kubelet[1533]: I0328 01:32:14.015792    1533 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ec77663c174f9dcbe665439298f2fb709a33fb88f7ac97c33834b5a202fe4540"
	I0328 01:33:31.488735    6044 command_runner.go:130] > Mar 28 01:32:14 multinode-240000 kubelet[1533]: I0328 01:32:14.042348    1533 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="20ff2ecb3a6dbfc2d1215de07989433af9d7d836214ecb1ab63afc9e48ef03ce"
	I0328 01:33:31.488735    6044 command_runner.go:130] > Mar 28 01:32:14 multinode-240000 kubelet[1533]: I0328 01:32:14.048339    1533 kubelet_node_status.go:73] "Attempting to register node" node="multinode-240000"
	I0328 01:33:31.488735    6044 command_runner.go:130] > Mar 28 01:32:14 multinode-240000 kubelet[1533]: E0328 01:32:14.049760    1533 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.28.229.19:8443: connect: connection refused" node="multinode-240000"
	I0328 01:33:31.488735    6044 command_runner.go:130] > Mar 28 01:32:14 multinode-240000 kubelet[1533]: I0328 01:32:14.068959    1533 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f5f9b00a2a0d8b16290abf555def0fb3-kubeconfig\") pod \"kube-scheduler-multinode-240000\" (UID: \"f5f9b00a2a0d8b16290abf555def0fb3\") " pod="kube-system/kube-scheduler-multinode-240000"
	I0328 01:33:31.488735    6044 command_runner.go:130] > Mar 28 01:32:14 multinode-240000 kubelet[1533]: I0328 01:32:14.069009    1533 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/9f48c65a58defdbb87996760bf93b230-etcd-certs\") pod \"etcd-multinode-240000\" (UID: \"9f48c65a58defdbb87996760bf93b230\") " pod="kube-system/etcd-multinode-240000"
	I0328 01:33:31.488735    6044 command_runner.go:130] > Mar 28 01:32:14 multinode-240000 kubelet[1533]: I0328 01:32:14.069204    1533 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/9f48c65a58defdbb87996760bf93b230-etcd-data\") pod \"etcd-multinode-240000\" (UID: \"9f48c65a58defdbb87996760bf93b230\") " pod="kube-system/etcd-multinode-240000"
	I0328 01:33:31.488735    6044 command_runner.go:130] > Mar 28 01:32:14 multinode-240000 kubelet[1533]: E0328 01:32:14.335282    1533 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-240000?timeout=10s\": dial tcp 172.28.229.19:8443: connect: connection refused" interval="800ms"
	I0328 01:33:31.488735    6044 command_runner.go:130] > Mar 28 01:32:14 multinode-240000 kubelet[1533]: I0328 01:32:14.463052    1533 kubelet_node_status.go:73] "Attempting to register node" node="multinode-240000"
	I0328 01:33:31.489754    6044 command_runner.go:130] > Mar 28 01:32:14 multinode-240000 kubelet[1533]: E0328 01:32:14.464639    1533 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.28.229.19:8443: connect: connection refused" node="multinode-240000"
	I0328 01:33:31.489754    6044 command_runner.go:130] > Mar 28 01:32:14 multinode-240000 kubelet[1533]: W0328 01:32:14.765820    1533 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.28.229.19:8443: connect: connection refused
	I0328 01:33:31.489754    6044 command_runner.go:130] > Mar 28 01:32:14 multinode-240000 kubelet[1533]: E0328 01:32:14.765926    1533 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.28.229.19:8443: connect: connection refused
	I0328 01:33:31.489754    6044 command_runner.go:130] > Mar 28 01:32:14 multinode-240000 kubelet[1533]: W0328 01:32:14.983409    1533 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.28.229.19:8443: connect: connection refused
	I0328 01:33:31.489754    6044 command_runner.go:130] > Mar 28 01:32:14 multinode-240000 kubelet[1533]: E0328 01:32:14.983490    1533 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.28.229.19:8443: connect: connection refused
	I0328 01:33:31.489754    6044 command_runner.go:130] > Mar 28 01:32:15 multinode-240000 kubelet[1533]: I0328 01:32:15.093921    1533 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4dd7c4652074475872599900ce854e48425a373dfa665073bd9bfb56fa5330c0"
	I0328 01:33:31.489754    6044 command_runner.go:130] > Mar 28 01:32:15 multinode-240000 kubelet[1533]: I0328 01:32:15.109197    1533 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8780a18ab975521e6b1b20e4b7cffe786927f03654dd858b9d179f1d73d13d81"
	I0328 01:33:31.489754    6044 command_runner.go:130] > Mar 28 01:32:15 multinode-240000 kubelet[1533]: E0328 01:32:15.138489    1533 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-240000?timeout=10s\": dial tcp 172.28.229.19:8443: connect: connection refused" interval="1.6s"
	I0328 01:33:31.489754    6044 command_runner.go:130] > Mar 28 01:32:15 multinode-240000 kubelet[1533]: W0328 01:32:15.162611    1533 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dmultinode-240000&limit=500&resourceVersion=0": dial tcp 172.28.229.19:8443: connect: connection refused
	I0328 01:33:31.489754    6044 command_runner.go:130] > Mar 28 01:32:15 multinode-240000 kubelet[1533]: E0328 01:32:15.162839    1533 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dmultinode-240000&limit=500&resourceVersion=0": dial tcp 172.28.229.19:8443: connect: connection refused
	I0328 01:33:31.489754    6044 command_runner.go:130] > Mar 28 01:32:15 multinode-240000 kubelet[1533]: W0328 01:32:15.243486    1533 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.28.229.19:8443: connect: connection refused
	I0328 01:33:31.489754    6044 command_runner.go:130] > Mar 28 01:32:15 multinode-240000 kubelet[1533]: E0328 01:32:15.243618    1533 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.28.229.19:8443: connect: connection refused
	I0328 01:33:31.489754    6044 command_runner.go:130] > Mar 28 01:32:15 multinode-240000 kubelet[1533]: I0328 01:32:15.300156    1533 kubelet_node_status.go:73] "Attempting to register node" node="multinode-240000"
	I0328 01:33:31.489754    6044 command_runner.go:130] > Mar 28 01:32:15 multinode-240000 kubelet[1533]: E0328 01:32:15.300985    1533 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.28.229.19:8443: connect: connection refused" node="multinode-240000"
	I0328 01:33:31.489754    6044 command_runner.go:130] > Mar 28 01:32:16 multinode-240000 kubelet[1533]: I0328 01:32:16.919859    1533 kubelet_node_status.go:73] "Attempting to register node" node="multinode-240000"
	I0328 01:33:31.489754    6044 command_runner.go:130] > Mar 28 01:32:19 multinode-240000 kubelet[1533]: I0328 01:32:19.585350    1533 kubelet_node_status.go:112] "Node was previously registered" node="multinode-240000"
	I0328 01:33:31.489754    6044 command_runner.go:130] > Mar 28 01:32:19 multinode-240000 kubelet[1533]: I0328 01:32:19.586142    1533 kubelet_node_status.go:76] "Successfully registered node" node="multinode-240000"
	I0328 01:33:31.489754    6044 command_runner.go:130] > Mar 28 01:32:19 multinode-240000 kubelet[1533]: I0328 01:32:19.588202    1533 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	I0328 01:33:31.489754    6044 command_runner.go:130] > Mar 28 01:32:19 multinode-240000 kubelet[1533]: I0328 01:32:19.589607    1533 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	I0328 01:33:31.489754    6044 command_runner.go:130] > Mar 28 01:32:19 multinode-240000 kubelet[1533]: I0328 01:32:19.606942    1533 setters.go:568] "Node became not ready" node="multinode-240000" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-03-28T01:32:19Z","lastTransitionTime":"2024-03-28T01:32:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"}
	I0328 01:33:31.489754    6044 command_runner.go:130] > Mar 28 01:32:19 multinode-240000 kubelet[1533]: I0328 01:32:19.664958    1533 apiserver.go:52] "Watching apiserver"
	I0328 01:33:31.489754    6044 command_runner.go:130] > Mar 28 01:32:19 multinode-240000 kubelet[1533]: I0328 01:32:19.670955    1533 topology_manager.go:215] "Topology Admit Handler" podUID="dc1416cc-736d-4eab-b95d-e963572b78e3" podNamespace="kube-system" podName="coredns-76f75df574-776ph"
	I0328 01:33:31.489754    6044 command_runner.go:130] > Mar 28 01:32:19 multinode-240000 kubelet[1533]: E0328 01:32:19.671192    1533 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-76f75df574-776ph" podUID="dc1416cc-736d-4eab-b95d-e963572b78e3"
	I0328 01:33:31.489754    6044 command_runner.go:130] > Mar 28 01:32:19 multinode-240000 kubelet[1533]: I0328 01:32:19.671207    1533 kubelet.go:1903] "Trying to delete pod" pod="kube-system/etcd-multinode-240000" podUID="8c9e76e4-ed9f-4595-aa5e-ddd6e74f4e93"
	I0328 01:33:31.489754    6044 command_runner.go:130] > Mar 28 01:32:19 multinode-240000 kubelet[1533]: I0328 01:32:19.672582    1533 topology_manager.go:215] "Topology Admit Handler" podUID="7c75e225-0e90-4916-bf27-a00a036e0955" podNamespace="kube-system" podName="kindnet-rwghf"
	I0328 01:33:31.489754    6044 command_runner.go:130] > Mar 28 01:32:19 multinode-240000 kubelet[1533]: I0328 01:32:19.672700    1533 topology_manager.go:215] "Topology Admit Handler" podUID="22fd5683-834d-47ae-a5b4-1ed980514e1b" podNamespace="kube-system" podName="kube-proxy-47rqg"
	I0328 01:33:31.489754    6044 command_runner.go:130] > Mar 28 01:32:19 multinode-240000 kubelet[1533]: I0328 01:32:19.672921    1533 topology_manager.go:215] "Topology Admit Handler" podUID="3f881c2f-5b9a-4dc5-bc8c-56784b9ff60f" podNamespace="kube-system" podName="storage-provisioner"
	I0328 01:33:31.489754    6044 command_runner.go:130] > Mar 28 01:32:19 multinode-240000 kubelet[1533]: I0328 01:32:19.672997    1533 topology_manager.go:215] "Topology Admit Handler" podUID="82be2bd2-ca76-4804-8e23-ebd40a434863" podNamespace="default" podName="busybox-7fdf7869d9-ct428"
	I0328 01:33:31.490718    6044 command_runner.go:130] > Mar 28 01:32:19 multinode-240000 kubelet[1533]: E0328 01:32:19.673204    1533 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7fdf7869d9-ct428" podUID="82be2bd2-ca76-4804-8e23-ebd40a434863"
	I0328 01:33:31.490718    6044 command_runner.go:130] > Mar 28 01:32:19 multinode-240000 kubelet[1533]: I0328 01:32:19.674661    1533 kubelet.go:1903] "Trying to delete pod" pod="kube-system/kube-apiserver-multinode-240000" podUID="7736298d-3898-4693-84bf-2311305bf52c"
	I0328 01:33:31.490718    6044 command_runner.go:130] > Mar 28 01:32:19 multinode-240000 kubelet[1533]: I0328 01:32:19.710220    1533 kubelet.go:1908] "Deleted mirror pod because it is outdated" pod="kube-system/etcd-multinode-240000"
	I0328 01:33:31.490718    6044 command_runner.go:130] > Mar 28 01:32:19 multinode-240000 kubelet[1533]: I0328 01:32:19.714418    1533 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
	I0328 01:33:31.490718    6044 command_runner.go:130] > Mar 28 01:32:19 multinode-240000 kubelet[1533]: I0328 01:32:19.725067    1533 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7c75e225-0e90-4916-bf27-a00a036e0955-xtables-lock\") pod \"kindnet-rwghf\" (UID: \"7c75e225-0e90-4916-bf27-a00a036e0955\") " pod="kube-system/kindnet-rwghf"
	I0328 01:33:31.490718    6044 command_runner.go:130] > Mar 28 01:32:19 multinode-240000 kubelet[1533]: I0328 01:32:19.725144    1533 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/3f881c2f-5b9a-4dc5-bc8c-56784b9ff60f-tmp\") pod \"storage-provisioner\" (UID: \"3f881c2f-5b9a-4dc5-bc8c-56784b9ff60f\") " pod="kube-system/storage-provisioner"
	I0328 01:33:31.490718    6044 command_runner.go:130] > Mar 28 01:32:19 multinode-240000 kubelet[1533]: I0328 01:32:19.725200    1533 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/22fd5683-834d-47ae-a5b4-1ed980514e1b-xtables-lock\") pod \"kube-proxy-47rqg\" (UID: \"22fd5683-834d-47ae-a5b4-1ed980514e1b\") " pod="kube-system/kube-proxy-47rqg"
	I0328 01:33:31.490718    6044 command_runner.go:130] > Mar 28 01:32:19 multinode-240000 kubelet[1533]: I0328 01:32:19.725237    1533 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/7c75e225-0e90-4916-bf27-a00a036e0955-cni-cfg\") pod \"kindnet-rwghf\" (UID: \"7c75e225-0e90-4916-bf27-a00a036e0955\") " pod="kube-system/kindnet-rwghf"
	I0328 01:33:31.490718    6044 command_runner.go:130] > Mar 28 01:32:19 multinode-240000 kubelet[1533]: I0328 01:32:19.725266    1533 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7c75e225-0e90-4916-bf27-a00a036e0955-lib-modules\") pod \"kindnet-rwghf\" (UID: \"7c75e225-0e90-4916-bf27-a00a036e0955\") " pod="kube-system/kindnet-rwghf"
	I0328 01:33:31.490718    6044 command_runner.go:130] > Mar 28 01:32:19 multinode-240000 kubelet[1533]: I0328 01:32:19.725305    1533 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/22fd5683-834d-47ae-a5b4-1ed980514e1b-lib-modules\") pod \"kube-proxy-47rqg\" (UID: \"22fd5683-834d-47ae-a5b4-1ed980514e1b\") " pod="kube-system/kube-proxy-47rqg"
	I0328 01:33:31.490718    6044 command_runner.go:130] > Mar 28 01:32:19 multinode-240000 kubelet[1533]: E0328 01:32:19.725432    1533 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0328 01:33:31.490718    6044 command_runner.go:130] > Mar 28 01:32:19 multinode-240000 kubelet[1533]: E0328 01:32:19.725551    1533 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/dc1416cc-736d-4eab-b95d-e963572b78e3-config-volume podName:dc1416cc-736d-4eab-b95d-e963572b78e3 nodeName:}" failed. No retries permitted until 2024-03-28 01:32:20.225500685 +0000 UTC m=+6.784510375 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/dc1416cc-736d-4eab-b95d-e963572b78e3-config-volume") pod "coredns-76f75df574-776ph" (UID: "dc1416cc-736d-4eab-b95d-e963572b78e3") : object "kube-system"/"coredns" not registered
	I0328 01:33:31.490718    6044 command_runner.go:130] > Mar 28 01:32:19 multinode-240000 kubelet[1533]: I0328 01:32:19.727738    1533 kubelet.go:1908] "Deleted mirror pod because it is outdated" pod="kube-system/kube-apiserver-multinode-240000"
	I0328 01:33:31.490718    6044 command_runner.go:130] > Mar 28 01:32:19 multinode-240000 kubelet[1533]: I0328 01:32:19.734766    1533 status_manager.go:877] "Failed to update status for pod" pod="kube-system/etcd-multinode-240000" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c9e76e4-ed9f-4595-aa5e-ddd6e74f4e93\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"$setElementOrder/hostIPs\\\":[{\\\"ip\\\":\\\"172.28.229.19\\\"}],\\\"$setElementOrder/podIPs\\\":[{\\\"ip\\\":\\\"172.28.229.19\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2024-03-28T01:32:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2024-03-28T01:32:14Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2024-03-28T01:32:14Z\\\",\\\"message\\\":\\\"containers with unready status: [etcd]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2024-03-28T01:32:14Z\\\",\\\"message\\\":\\\"containers with unready status: [etcd]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2024-03-28T01:32:14Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"docker://ab4a76ecb029b98cd5b2c7ce34c9d81d5da9b76e6721e8e54059f840240fcb66\\\",\\\"image\\\":\\\"registry.k8s.io/etcd:3.5.12-0\\\",\\\"imageID\\\":\\\"docker-pullable://registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2024-03-28T01:32:15Z\\\"}}}],\\\"hostIP\\\":\\\"172.28.229.19\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"172.28.229.19\\\"},{\\\"$patch\\\":\\\"delete\\\",\\\"ip\\\":\\\"172.28.227.122\\\"}],\\\"podIP\\\":\\\"172.28.229.19\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"172.28.229.19\\\"},{\\\"$patch\\\":\\\"delete\\\",\\\"ip\\\":\\\"172.28.227.122\\\"}],\\\"startTime\\\":\\\"2024-03-28T01:32:14Z\\\"}}\" for pod \"kube-system\"/\"etcd-multinode-240000\": pods \"etcd-multinode-240000\" not found"
	I0328 01:33:31.490718    6044 command_runner.go:130] > Mar 28 01:32:19 multinode-240000 kubelet[1533]: I0328 01:32:19.799037    1533 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="08b85a8adf05b50d7739532a291175d4" path="/var/lib/kubelet/pods/08b85a8adf05b50d7739532a291175d4/volumes"
	I0328 01:33:31.490718    6044 command_runner.go:130] > Mar 28 01:32:19 multinode-240000 kubelet[1533]: E0328 01:32:19.799563    1533 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0328 01:33:31.490718    6044 command_runner.go:130] > Mar 28 01:32:19 multinode-240000 kubelet[1533]: E0328 01:32:19.799591    1533 projected.go:200] Error preparing data for projected volume kube-api-access-86msg for pod default/busybox-7fdf7869d9-ct428: object "default"/"kube-root-ca.crt" not registered
	I0328 01:33:31.490718    6044 command_runner.go:130] > Mar 28 01:32:19 multinode-240000 kubelet[1533]: E0328 01:32:19.799660    1533 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/82be2bd2-ca76-4804-8e23-ebd40a434863-kube-api-access-86msg podName:82be2bd2-ca76-4804-8e23-ebd40a434863 nodeName:}" failed. No retries permitted until 2024-03-28 01:32:20.299638671 +0000 UTC m=+6.858648361 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-86msg" (UniqueName: "kubernetes.io/projected/82be2bd2-ca76-4804-8e23-ebd40a434863-kube-api-access-86msg") pod "busybox-7fdf7869d9-ct428" (UID: "82be2bd2-ca76-4804-8e23-ebd40a434863") : object "default"/"kube-root-ca.crt" not registered
	I0328 01:33:31.490718    6044 command_runner.go:130] > Mar 28 01:32:19 multinode-240000 kubelet[1533]: I0328 01:32:19.802339    1533 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3bf911dad00226d1456d6201aff35c8b" path="/var/lib/kubelet/pods/3bf911dad00226d1456d6201aff35c8b/volumes"
	I0328 01:33:31.490718    6044 command_runner.go:130] > Mar 28 01:32:19 multinode-240000 kubelet[1533]: I0328 01:32:19.949419    1533 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/etcd-multinode-240000" podStartSLOduration=0.949323047 podStartE2EDuration="949.323047ms" podCreationTimestamp="2024-03-28 01:32:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-03-28 01:32:19.919943873 +0000 UTC m=+6.478953663" watchObservedRunningTime="2024-03-28 01:32:19.949323047 +0000 UTC m=+6.508332737"
	I0328 01:33:31.490718    6044 command_runner.go:130] > Mar 28 01:32:19 multinode-240000 kubelet[1533]: I0328 01:32:19.949693    1533 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-multinode-240000" podStartSLOduration=0.949665448 podStartE2EDuration="949.665448ms" podCreationTimestamp="2024-03-28 01:32:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-03-28 01:32:19.941427427 +0000 UTC m=+6.500437217" watchObservedRunningTime="2024-03-28 01:32:19.949665448 +0000 UTC m=+6.508675138"
	I0328 01:33:31.490718    6044 command_runner.go:130] > Mar 28 01:32:20 multinode-240000 kubelet[1533]: E0328 01:32:20.230868    1533 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0328 01:33:31.490718    6044 command_runner.go:130] > Mar 28 01:32:20 multinode-240000 kubelet[1533]: E0328 01:32:20.231013    1533 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/dc1416cc-736d-4eab-b95d-e963572b78e3-config-volume podName:dc1416cc-736d-4eab-b95d-e963572b78e3 nodeName:}" failed. No retries permitted until 2024-03-28 01:32:21.230991954 +0000 UTC m=+7.790001744 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/dc1416cc-736d-4eab-b95d-e963572b78e3-config-volume") pod "coredns-76f75df574-776ph" (UID: "dc1416cc-736d-4eab-b95d-e963572b78e3") : object "kube-system"/"coredns" not registered
	I0328 01:33:31.490718    6044 command_runner.go:130] > Mar 28 01:32:20 multinode-240000 kubelet[1533]: E0328 01:32:20.331172    1533 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0328 01:33:31.490718    6044 command_runner.go:130] > Mar 28 01:32:20 multinode-240000 kubelet[1533]: E0328 01:32:20.331223    1533 projected.go:200] Error preparing data for projected volume kube-api-access-86msg for pod default/busybox-7fdf7869d9-ct428: object "default"/"kube-root-ca.crt" not registered
	I0328 01:33:31.491726    6044 command_runner.go:130] > Mar 28 01:32:20 multinode-240000 kubelet[1533]: E0328 01:32:20.331292    1533 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/82be2bd2-ca76-4804-8e23-ebd40a434863-kube-api-access-86msg podName:82be2bd2-ca76-4804-8e23-ebd40a434863 nodeName:}" failed. No retries permitted until 2024-03-28 01:32:21.331274305 +0000 UTC m=+7.890283995 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-86msg" (UniqueName: "kubernetes.io/projected/82be2bd2-ca76-4804-8e23-ebd40a434863-kube-api-access-86msg") pod "busybox-7fdf7869d9-ct428" (UID: "82be2bd2-ca76-4804-8e23-ebd40a434863") : object "default"/"kube-root-ca.crt" not registered
	I0328 01:33:31.491726    6044 command_runner.go:130] > Mar 28 01:32:20 multinode-240000 kubelet[1533]: I0328 01:32:20.880883    1533 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="821d3cf9ae1a9ffce2f350e9ee239e00fd8743eb338fae8a5b39734fc9cabf5e"
	I0328 01:33:31.491726    6044 command_runner.go:130] > Mar 28 01:32:20 multinode-240000 kubelet[1533]: I0328 01:32:20.905234    1533 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dfd01cb54b7d89aef97b057d7578bb34d4f58b0e2c9aacddeeff9fbb19db3cb6"
	I0328 01:33:31.491726    6044 command_runner.go:130] > Mar 28 01:32:21 multinode-240000 kubelet[1533]: E0328 01:32:21.238101    1533 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0328 01:33:31.491726    6044 command_runner.go:130] > Mar 28 01:32:21 multinode-240000 kubelet[1533]: E0328 01:32:21.238271    1533 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/dc1416cc-736d-4eab-b95d-e963572b78e3-config-volume podName:dc1416cc-736d-4eab-b95d-e963572b78e3 nodeName:}" failed. No retries permitted until 2024-03-28 01:32:23.238201582 +0000 UTC m=+9.797211372 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/dc1416cc-736d-4eab-b95d-e963572b78e3-config-volume") pod "coredns-76f75df574-776ph" (UID: "dc1416cc-736d-4eab-b95d-e963572b78e3") : object "kube-system"/"coredns" not registered
	I0328 01:33:31.491726    6044 command_runner.go:130] > Mar 28 01:32:21 multinode-240000 kubelet[1533]: I0328 01:32:21.272138    1533 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="347f7ad7ebaed8796c8b12cf936e661c605c1c7a9dc02ccb15b4c682a96c1058"
	I0328 01:33:31.491726    6044 command_runner.go:130] > Mar 28 01:32:21 multinode-240000 kubelet[1533]: E0328 01:32:21.338941    1533 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0328 01:33:31.491726    6044 command_runner.go:130] > Mar 28 01:32:21 multinode-240000 kubelet[1533]: E0328 01:32:21.338996    1533 projected.go:200] Error preparing data for projected volume kube-api-access-86msg for pod default/busybox-7fdf7869d9-ct428: object "default"/"kube-root-ca.crt" not registered
	I0328 01:33:31.491726    6044 command_runner.go:130] > Mar 28 01:32:21 multinode-240000 kubelet[1533]: E0328 01:32:21.339062    1533 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/82be2bd2-ca76-4804-8e23-ebd40a434863-kube-api-access-86msg podName:82be2bd2-ca76-4804-8e23-ebd40a434863 nodeName:}" failed. No retries permitted until 2024-03-28 01:32:23.339043635 +0000 UTC m=+9.898053325 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-86msg" (UniqueName: "kubernetes.io/projected/82be2bd2-ca76-4804-8e23-ebd40a434863-kube-api-access-86msg") pod "busybox-7fdf7869d9-ct428" (UID: "82be2bd2-ca76-4804-8e23-ebd40a434863") : object "default"/"kube-root-ca.crt" not registered
	I0328 01:33:31.491726    6044 command_runner.go:130] > Mar 28 01:32:21 multinode-240000 kubelet[1533]: E0328 01:32:21.791679    1533 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-76f75df574-776ph" podUID="dc1416cc-736d-4eab-b95d-e963572b78e3"
	I0328 01:33:31.491726    6044 command_runner.go:130] > Mar 28 01:32:21 multinode-240000 kubelet[1533]: E0328 01:32:21.792217    1533 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7fdf7869d9-ct428" podUID="82be2bd2-ca76-4804-8e23-ebd40a434863"
	I0328 01:33:31.491726    6044 command_runner.go:130] > Mar 28 01:32:23 multinode-240000 kubelet[1533]: E0328 01:32:23.261654    1533 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0328 01:33:31.491726    6044 command_runner.go:130] > Mar 28 01:32:23 multinode-240000 kubelet[1533]: E0328 01:32:23.261858    1533 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/dc1416cc-736d-4eab-b95d-e963572b78e3-config-volume podName:dc1416cc-736d-4eab-b95d-e963572b78e3 nodeName:}" failed. No retries permitted until 2024-03-28 01:32:27.261834961 +0000 UTC m=+13.820844751 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/dc1416cc-736d-4eab-b95d-e963572b78e3-config-volume") pod "coredns-76f75df574-776ph" (UID: "dc1416cc-736d-4eab-b95d-e963572b78e3") : object "kube-system"/"coredns" not registered
	I0328 01:33:31.491726    6044 command_runner.go:130] > Mar 28 01:32:23 multinode-240000 kubelet[1533]: E0328 01:32:23.362225    1533 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0328 01:33:31.491726    6044 command_runner.go:130] > Mar 28 01:32:23 multinode-240000 kubelet[1533]: E0328 01:32:23.362265    1533 projected.go:200] Error preparing data for projected volume kube-api-access-86msg for pod default/busybox-7fdf7869d9-ct428: object "default"/"kube-root-ca.crt" not registered
	I0328 01:33:31.491726    6044 command_runner.go:130] > Mar 28 01:32:23 multinode-240000 kubelet[1533]: E0328 01:32:23.362325    1533 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/82be2bd2-ca76-4804-8e23-ebd40a434863-kube-api-access-86msg podName:82be2bd2-ca76-4804-8e23-ebd40a434863 nodeName:}" failed. No retries permitted until 2024-03-28 01:32:27.362305413 +0000 UTC m=+13.921315103 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-86msg" (UniqueName: "kubernetes.io/projected/82be2bd2-ca76-4804-8e23-ebd40a434863-kube-api-access-86msg") pod "busybox-7fdf7869d9-ct428" (UID: "82be2bd2-ca76-4804-8e23-ebd40a434863") : object "default"/"kube-root-ca.crt" not registered
	I0328 01:33:31.491726    6044 command_runner.go:130] > Mar 28 01:32:23 multinode-240000 kubelet[1533]: E0328 01:32:23.790396    1533 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-76f75df574-776ph" podUID="dc1416cc-736d-4eab-b95d-e963572b78e3"
	I0328 01:33:31.491726    6044 command_runner.go:130] > Mar 28 01:32:23 multinode-240000 kubelet[1533]: E0328 01:32:23.790902    1533 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7fdf7869d9-ct428" podUID="82be2bd2-ca76-4804-8e23-ebd40a434863"
	I0328 01:33:31.491726    6044 command_runner.go:130] > Mar 28 01:32:25 multinode-240000 kubelet[1533]: E0328 01:32:25.790044    1533 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7fdf7869d9-ct428" podUID="82be2bd2-ca76-4804-8e23-ebd40a434863"
	I0328 01:33:31.491726    6044 command_runner.go:130] > Mar 28 01:32:25 multinode-240000 kubelet[1533]: E0328 01:32:25.790562    1533 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-76f75df574-776ph" podUID="dc1416cc-736d-4eab-b95d-e963572b78e3"
	I0328 01:33:31.491726    6044 command_runner.go:130] > Mar 28 01:32:27 multinode-240000 kubelet[1533]: E0328 01:32:27.292215    1533 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0328 01:33:31.491726    6044 command_runner.go:130] > Mar 28 01:32:27 multinode-240000 kubelet[1533]: E0328 01:32:27.292399    1533 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/dc1416cc-736d-4eab-b95d-e963572b78e3-config-volume podName:dc1416cc-736d-4eab-b95d-e963572b78e3 nodeName:}" failed. No retries permitted until 2024-03-28 01:32:35.292355671 +0000 UTC m=+21.851365461 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/dc1416cc-736d-4eab-b95d-e963572b78e3-config-volume") pod "coredns-76f75df574-776ph" (UID: "dc1416cc-736d-4eab-b95d-e963572b78e3") : object "kube-system"/"coredns" not registered
	I0328 01:33:31.491726    6044 command_runner.go:130] > Mar 28 01:32:27 multinode-240000 kubelet[1533]: E0328 01:32:27.393085    1533 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0328 01:33:31.491726    6044 command_runner.go:130] > Mar 28 01:32:27 multinode-240000 kubelet[1533]: E0328 01:32:27.393207    1533 projected.go:200] Error preparing data for projected volume kube-api-access-86msg for pod default/busybox-7fdf7869d9-ct428: object "default"/"kube-root-ca.crt" not registered
	I0328 01:33:31.491726    6044 command_runner.go:130] > Mar 28 01:32:27 multinode-240000 kubelet[1533]: E0328 01:32:27.393270    1533 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/82be2bd2-ca76-4804-8e23-ebd40a434863-kube-api-access-86msg podName:82be2bd2-ca76-4804-8e23-ebd40a434863 nodeName:}" failed. No retries permitted until 2024-03-28 01:32:35.393251521 +0000 UTC m=+21.952261211 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-86msg" (UniqueName: "kubernetes.io/projected/82be2bd2-ca76-4804-8e23-ebd40a434863-kube-api-access-86msg") pod "busybox-7fdf7869d9-ct428" (UID: "82be2bd2-ca76-4804-8e23-ebd40a434863") : object "default"/"kube-root-ca.crt" not registered
	I0328 01:33:31.492735    6044 command_runner.go:130] > Mar 28 01:32:27 multinode-240000 kubelet[1533]: E0328 01:32:27.791559    1533 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7fdf7869d9-ct428" podUID="82be2bd2-ca76-4804-8e23-ebd40a434863"
	I0328 01:33:31.492735    6044 command_runner.go:130] > Mar 28 01:32:27 multinode-240000 kubelet[1533]: E0328 01:32:27.792839    1533 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-76f75df574-776ph" podUID="dc1416cc-736d-4eab-b95d-e963572b78e3"
	I0328 01:33:31.492735    6044 command_runner.go:130] > Mar 28 01:32:29 multinode-240000 kubelet[1533]: E0328 01:32:29.790087    1533 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-76f75df574-776ph" podUID="dc1416cc-736d-4eab-b95d-e963572b78e3"
	I0328 01:33:31.492735    6044 command_runner.go:130] > Mar 28 01:32:29 multinode-240000 kubelet[1533]: E0328 01:32:29.793138    1533 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7fdf7869d9-ct428" podUID="82be2bd2-ca76-4804-8e23-ebd40a434863"
	I0328 01:33:31.492735    6044 command_runner.go:130] > Mar 28 01:32:31 multinode-240000 kubelet[1533]: E0328 01:32:31.791578    1533 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7fdf7869d9-ct428" podUID="82be2bd2-ca76-4804-8e23-ebd40a434863"
	I0328 01:33:31.492735    6044 command_runner.go:130] > Mar 28 01:32:31 multinode-240000 kubelet[1533]: E0328 01:32:31.792402    1533 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-76f75df574-776ph" podUID="dc1416cc-736d-4eab-b95d-e963572b78e3"
	I0328 01:33:31.492735    6044 command_runner.go:130] > Mar 28 01:32:33 multinode-240000 kubelet[1533]: E0328 01:32:33.789342    1533 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7fdf7869d9-ct428" podUID="82be2bd2-ca76-4804-8e23-ebd40a434863"
	I0328 01:33:31.492735    6044 command_runner.go:130] > Mar 28 01:32:33 multinode-240000 kubelet[1533]: E0328 01:32:33.790306    1533 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-76f75df574-776ph" podUID="dc1416cc-736d-4eab-b95d-e963572b78e3"
	I0328 01:33:31.492735    6044 command_runner.go:130] > Mar 28 01:32:35 multinode-240000 kubelet[1533]: E0328 01:32:35.358933    1533 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0328 01:33:31.492735    6044 command_runner.go:130] > Mar 28 01:32:35 multinode-240000 kubelet[1533]: E0328 01:32:35.359250    1533 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/dc1416cc-736d-4eab-b95d-e963572b78e3-config-volume podName:dc1416cc-736d-4eab-b95d-e963572b78e3 nodeName:}" failed. No retries permitted until 2024-03-28 01:32:51.359180546 +0000 UTC m=+37.918190236 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/dc1416cc-736d-4eab-b95d-e963572b78e3-config-volume") pod "coredns-76f75df574-776ph" (UID: "dc1416cc-736d-4eab-b95d-e963572b78e3") : object "kube-system"/"coredns" not registered
	I0328 01:33:31.492735    6044 command_runner.go:130] > Mar 28 01:32:35 multinode-240000 kubelet[1533]: E0328 01:32:35.460013    1533 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0328 01:33:31.492735    6044 command_runner.go:130] > Mar 28 01:32:35 multinode-240000 kubelet[1533]: E0328 01:32:35.460054    1533 projected.go:200] Error preparing data for projected volume kube-api-access-86msg for pod default/busybox-7fdf7869d9-ct428: object "default"/"kube-root-ca.crt" not registered
	I0328 01:33:31.492735    6044 command_runner.go:130] > Mar 28 01:32:35 multinode-240000 kubelet[1533]: E0328 01:32:35.460129    1533 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/82be2bd2-ca76-4804-8e23-ebd40a434863-kube-api-access-86msg podName:82be2bd2-ca76-4804-8e23-ebd40a434863 nodeName:}" failed. No retries permitted until 2024-03-28 01:32:51.460096057 +0000 UTC m=+38.019105747 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-86msg" (UniqueName: "kubernetes.io/projected/82be2bd2-ca76-4804-8e23-ebd40a434863-kube-api-access-86msg") pod "busybox-7fdf7869d9-ct428" (UID: "82be2bd2-ca76-4804-8e23-ebd40a434863") : object "default"/"kube-root-ca.crt" not registered
	I0328 01:33:31.492735    6044 command_runner.go:130] > Mar 28 01:32:35 multinode-240000 kubelet[1533]: E0328 01:32:35.790050    1533 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7fdf7869d9-ct428" podUID="82be2bd2-ca76-4804-8e23-ebd40a434863"
	I0328 01:33:31.492735    6044 command_runner.go:130] > Mar 28 01:32:35 multinode-240000 kubelet[1533]: E0328 01:32:35.792176    1533 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-76f75df574-776ph" podUID="dc1416cc-736d-4eab-b95d-e963572b78e3"
	I0328 01:33:31.492735    6044 command_runner.go:130] > Mar 28 01:32:37 multinode-240000 kubelet[1533]: E0328 01:32:37.791217    1533 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-76f75df574-776ph" podUID="dc1416cc-736d-4eab-b95d-e963572b78e3"
	I0328 01:33:31.492735    6044 command_runner.go:130] > Mar 28 01:32:37 multinode-240000 kubelet[1533]: E0328 01:32:37.792228    1533 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7fdf7869d9-ct428" podUID="82be2bd2-ca76-4804-8e23-ebd40a434863"
	I0328 01:33:31.492735    6044 command_runner.go:130] > Mar 28 01:32:39 multinode-240000 kubelet[1533]: E0328 01:32:39.789082    1533 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7fdf7869d9-ct428" podUID="82be2bd2-ca76-4804-8e23-ebd40a434863"
	I0328 01:33:31.492735    6044 command_runner.go:130] > Mar 28 01:32:39 multinode-240000 kubelet[1533]: E0328 01:32:39.789888    1533 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-76f75df574-776ph" podUID="dc1416cc-736d-4eab-b95d-e963572b78e3"
	I0328 01:33:31.492735    6044 command_runner.go:130] > Mar 28 01:32:41 multinode-240000 kubelet[1533]: E0328 01:32:41.789933    1533 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7fdf7869d9-ct428" podUID="82be2bd2-ca76-4804-8e23-ebd40a434863"
	I0328 01:33:31.492735    6044 command_runner.go:130] > Mar 28 01:32:41 multinode-240000 kubelet[1533]: E0328 01:32:41.790703    1533 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-76f75df574-776ph" podUID="dc1416cc-736d-4eab-b95d-e963572b78e3"
	I0328 01:33:31.492735    6044 command_runner.go:130] > Mar 28 01:32:43 multinode-240000 kubelet[1533]: E0328 01:32:43.789453    1533 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7fdf7869d9-ct428" podUID="82be2bd2-ca76-4804-8e23-ebd40a434863"
	I0328 01:33:31.492735    6044 command_runner.go:130] > Mar 28 01:32:43 multinode-240000 kubelet[1533]: E0328 01:32:43.790318    1533 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-76f75df574-776ph" podUID="dc1416cc-736d-4eab-b95d-e963572b78e3"
	I0328 01:33:31.492735    6044 command_runner.go:130] > Mar 28 01:32:45 multinode-240000 kubelet[1533]: E0328 01:32:45.789795    1533 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-76f75df574-776ph" podUID="dc1416cc-736d-4eab-b95d-e963572b78e3"
	I0328 01:33:31.493725    6044 command_runner.go:130] > Mar 28 01:32:45 multinode-240000 kubelet[1533]: E0328 01:32:45.790497    1533 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7fdf7869d9-ct428" podUID="82be2bd2-ca76-4804-8e23-ebd40a434863"
	I0328 01:33:31.493725    6044 command_runner.go:130] > Mar 28 01:32:47 multinode-240000 kubelet[1533]: E0328 01:32:47.789306    1533 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7fdf7869d9-ct428" podUID="82be2bd2-ca76-4804-8e23-ebd40a434863"
	I0328 01:33:31.493725    6044 command_runner.go:130] > Mar 28 01:32:47 multinode-240000 kubelet[1533]: E0328 01:32:47.790760    1533 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-76f75df574-776ph" podUID="dc1416cc-736d-4eab-b95d-e963572b78e3"
	I0328 01:33:31.493725    6044 command_runner.go:130] > Mar 28 01:32:49 multinode-240000 kubelet[1533]: E0328 01:32:49.790669    1533 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-76f75df574-776ph" podUID="dc1416cc-736d-4eab-b95d-e963572b78e3"
	I0328 01:33:31.493725    6044 command_runner.go:130] > Mar 28 01:32:49 multinode-240000 kubelet[1533]: E0328 01:32:49.800302    1533 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7fdf7869d9-ct428" podUID="82be2bd2-ca76-4804-8e23-ebd40a434863"
	I0328 01:33:31.493725    6044 command_runner.go:130] > Mar 28 01:32:51 multinode-240000 kubelet[1533]: E0328 01:32:51.398046    1533 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0328 01:33:31.493725    6044 command_runner.go:130] > Mar 28 01:32:51 multinode-240000 kubelet[1533]: E0328 01:32:51.399557    1533 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/dc1416cc-736d-4eab-b95d-e963572b78e3-config-volume podName:dc1416cc-736d-4eab-b95d-e963572b78e3 nodeName:}" failed. No retries permitted until 2024-03-28 01:33:23.399534782 +0000 UTC m=+69.958544472 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/dc1416cc-736d-4eab-b95d-e963572b78e3-config-volume") pod "coredns-76f75df574-776ph" (UID: "dc1416cc-736d-4eab-b95d-e963572b78e3") : object "kube-system"/"coredns" not registered
	I0328 01:33:31.493725    6044 command_runner.go:130] > Mar 28 01:32:51 multinode-240000 kubelet[1533]: E0328 01:32:51.499389    1533 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0328 01:33:31.493725    6044 command_runner.go:130] > Mar 28 01:32:51 multinode-240000 kubelet[1533]: E0328 01:32:51.499479    1533 projected.go:200] Error preparing data for projected volume kube-api-access-86msg for pod default/busybox-7fdf7869d9-ct428: object "default"/"kube-root-ca.crt" not registered
	I0328 01:33:31.493725    6044 command_runner.go:130] > Mar 28 01:32:51 multinode-240000 kubelet[1533]: E0328 01:32:51.499555    1533 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/82be2bd2-ca76-4804-8e23-ebd40a434863-kube-api-access-86msg podName:82be2bd2-ca76-4804-8e23-ebd40a434863 nodeName:}" failed. No retries permitted until 2024-03-28 01:33:23.499533548 +0000 UTC m=+70.058543238 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-86msg" (UniqueName: "kubernetes.io/projected/82be2bd2-ca76-4804-8e23-ebd40a434863-kube-api-access-86msg") pod "busybox-7fdf7869d9-ct428" (UID: "82be2bd2-ca76-4804-8e23-ebd40a434863") : object "default"/"kube-root-ca.crt" not registered
	I0328 01:33:31.493725    6044 command_runner.go:130] > Mar 28 01:32:51 multinode-240000 kubelet[1533]: E0328 01:32:51.789982    1533 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7fdf7869d9-ct428" podUID="82be2bd2-ca76-4804-8e23-ebd40a434863"
	I0328 01:33:31.493725    6044 command_runner.go:130] > Mar 28 01:32:51 multinode-240000 kubelet[1533]: E0328 01:32:51.790491    1533 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-76f75df574-776ph" podUID="dc1416cc-736d-4eab-b95d-e963572b78e3"
	I0328 01:33:31.493725    6044 command_runner.go:130] > Mar 28 01:32:52 multinode-240000 kubelet[1533]: I0328 01:32:52.819055    1533 scope.go:117] "RemoveContainer" containerID="d02996b2d57bf7439b634e180f3f28e83a0825e92695a9ca17ecca77cbb5da1c"
	I0328 01:33:31.493725    6044 command_runner.go:130] > Mar 28 01:32:52 multinode-240000 kubelet[1533]: I0328 01:32:52.819508    1533 scope.go:117] "RemoveContainer" containerID="4dcf03394ea80c2d0427ea263be7c533ab3aaf86ba89e254db95cbdd1b70a343"
	I0328 01:33:31.493725    6044 command_runner.go:130] > Mar 28 01:32:52 multinode-240000 kubelet[1533]: E0328 01:32:52.820004    1533 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(3f881c2f-5b9a-4dc5-bc8c-56784b9ff60f)\"" pod="kube-system/storage-provisioner" podUID="3f881c2f-5b9a-4dc5-bc8c-56784b9ff60f"
	I0328 01:33:31.493725    6044 command_runner.go:130] > Mar 28 01:32:53 multinode-240000 kubelet[1533]: E0328 01:32:53.789452    1533 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7fdf7869d9-ct428" podUID="82be2bd2-ca76-4804-8e23-ebd40a434863"
	I0328 01:33:31.493725    6044 command_runner.go:130] > Mar 28 01:32:53 multinode-240000 kubelet[1533]: E0328 01:32:53.791042    1533 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-76f75df574-776ph" podUID="dc1416cc-736d-4eab-b95d-e963572b78e3"
	I0328 01:33:31.493725    6044 command_runner.go:130] > Mar 28 01:32:53 multinode-240000 kubelet[1533]: I0328 01:32:53.945064    1533 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
	I0328 01:33:31.493725    6044 command_runner.go:130] > Mar 28 01:33:04 multinode-240000 kubelet[1533]: I0328 01:33:04.789137    1533 scope.go:117] "RemoveContainer" containerID="4dcf03394ea80c2d0427ea263be7c533ab3aaf86ba89e254db95cbdd1b70a343"
	I0328 01:33:31.493725    6044 command_runner.go:130] > Mar 28 01:33:13 multinode-240000 kubelet[1533]: I0328 01:33:13.803616    1533 scope.go:117] "RemoveContainer" containerID="66f15076d3443d3fc3179676ba45f1cbac7cf2eb673e7741a3dddae0eb5baac8"
	I0328 01:33:31.493725    6044 command_runner.go:130] > Mar 28 01:33:13 multinode-240000 kubelet[1533]: E0328 01:33:13.838374    1533 iptables.go:575] "Could not set up iptables canary" err=<
	I0328 01:33:31.493725    6044 command_runner.go:130] > Mar 28 01:33:13 multinode-240000 kubelet[1533]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	I0328 01:33:31.493725    6044 command_runner.go:130] > Mar 28 01:33:13 multinode-240000 kubelet[1533]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	I0328 01:33:31.493725    6044 command_runner.go:130] > Mar 28 01:33:13 multinode-240000 kubelet[1533]:         Perhaps ip6tables or your kernel needs to be upgraded.
	I0328 01:33:31.493725    6044 command_runner.go:130] > Mar 28 01:33:13 multinode-240000 kubelet[1533]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	I0328 01:33:31.493725    6044 command_runner.go:130] > Mar 28 01:33:13 multinode-240000 kubelet[1533]: I0328 01:33:13.850324    1533 scope.go:117] "RemoveContainer" containerID="a01212226d03a29a5f7e096880ecf627817c14801c81f452beaa1a398b97cfe3"
	I0328 01:33:31.543378    6044 logs.go:123] Gathering logs for kube-apiserver [6539c85e1b61] ...
	I0328 01:33:31.544304    6044 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6539c85e1b61"
	I0328 01:33:31.577377    6044 command_runner.go:130] ! I0328 01:32:16.440903       1 options.go:222] external host was not specified, using 172.28.229.19
	I0328 01:33:31.579311    6044 command_runner.go:130] ! I0328 01:32:16.443001       1 server.go:148] Version: v1.29.3
	I0328 01:33:31.579636    6044 command_runner.go:130] ! I0328 01:32:16.443211       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0328 01:33:31.579636    6044 command_runner.go:130] ! I0328 01:32:17.234065       1 shared_informer.go:311] Waiting for caches to sync for node_authorizer
	I0328 01:33:31.579636    6044 command_runner.go:130] ! I0328 01:32:17.251028       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0328 01:33:31.579902    6044 command_runner.go:130] ! I0328 01:32:17.252647       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0328 01:33:31.579902    6044 command_runner.go:130] ! I0328 01:32:17.253295       1 instance.go:297] Using reconciler: lease
	I0328 01:33:31.579902    6044 command_runner.go:130] ! I0328 01:32:17.488371       1 handler.go:275] Adding GroupVersion apiextensions.k8s.io v1 to ResourceManager
	I0328 01:33:31.579902    6044 command_runner.go:130] ! W0328 01:32:17.492937       1 genericapiserver.go:742] Skipping API apiextensions.k8s.io/v1beta1 because it has no resources.
	I0328 01:33:31.580028    6044 command_runner.go:130] ! I0328 01:32:17.992938       1 handler.go:275] Adding GroupVersion  v1 to ResourceManager
	I0328 01:33:31.580028    6044 command_runner.go:130] ! I0328 01:32:17.993291       1 instance.go:693] API group "internal.apiserver.k8s.io" is not enabled, skipping.
	I0328 01:33:31.580028    6044 command_runner.go:130] ! I0328 01:32:18.498808       1 instance.go:693] API group "resource.k8s.io" is not enabled, skipping.
	I0328 01:33:31.580146    6044 command_runner.go:130] ! I0328 01:32:18.513162       1 handler.go:275] Adding GroupVersion authentication.k8s.io v1 to ResourceManager
	I0328 01:33:31.580146    6044 command_runner.go:130] ! W0328 01:32:18.513265       1 genericapiserver.go:742] Skipping API authentication.k8s.io/v1beta1 because it has no resources.
	I0328 01:33:31.580254    6044 command_runner.go:130] ! W0328 01:32:18.513276       1 genericapiserver.go:742] Skipping API authentication.k8s.io/v1alpha1 because it has no resources.
	I0328 01:33:31.580254    6044 command_runner.go:130] ! I0328 01:32:18.513869       1 handler.go:275] Adding GroupVersion authorization.k8s.io v1 to ResourceManager
	I0328 01:33:31.580254    6044 command_runner.go:130] ! W0328 01:32:18.513921       1 genericapiserver.go:742] Skipping API authorization.k8s.io/v1beta1 because it has no resources.
	I0328 01:33:31.580366    6044 command_runner.go:130] ! I0328 01:32:18.515227       1 handler.go:275] Adding GroupVersion autoscaling v2 to ResourceManager
	I0328 01:33:31.580366    6044 command_runner.go:130] ! I0328 01:32:18.516586       1 handler.go:275] Adding GroupVersion autoscaling v1 to ResourceManager
	I0328 01:33:31.580467    6044 command_runner.go:130] ! W0328 01:32:18.516885       1 genericapiserver.go:742] Skipping API autoscaling/v2beta1 because it has no resources.
	I0328 01:33:31.580467    6044 command_runner.go:130] ! W0328 01:32:18.516898       1 genericapiserver.go:742] Skipping API autoscaling/v2beta2 because it has no resources.
	I0328 01:33:31.580575    6044 command_runner.go:130] ! I0328 01:32:18.519356       1 handler.go:275] Adding GroupVersion batch v1 to ResourceManager
	I0328 01:33:31.580575    6044 command_runner.go:130] ! W0328 01:32:18.519460       1 genericapiserver.go:742] Skipping API batch/v1beta1 because it has no resources.
	I0328 01:33:31.580575    6044 command_runner.go:130] ! I0328 01:32:18.520668       1 handler.go:275] Adding GroupVersion certificates.k8s.io v1 to ResourceManager
	I0328 01:33:31.580575    6044 command_runner.go:130] ! W0328 01:32:18.520820       1 genericapiserver.go:742] Skipping API certificates.k8s.io/v1beta1 because it has no resources.
	I0328 01:33:31.580748    6044 command_runner.go:130] ! W0328 01:32:18.520830       1 genericapiserver.go:742] Skipping API certificates.k8s.io/v1alpha1 because it has no resources.
	I0328 01:33:31.580838    6044 command_runner.go:130] ! I0328 01:32:18.521802       1 handler.go:275] Adding GroupVersion coordination.k8s.io v1 to ResourceManager
	I0328 01:33:31.580838    6044 command_runner.go:130] ! W0328 01:32:18.521903       1 genericapiserver.go:742] Skipping API coordination.k8s.io/v1beta1 because it has no resources.
	I0328 01:33:31.580933    6044 command_runner.go:130] ! W0328 01:32:18.521953       1 genericapiserver.go:742] Skipping API discovery.k8s.io/v1beta1 because it has no resources.
	I0328 01:33:31.580933    6044 command_runner.go:130] ! I0328 01:32:18.523269       1 handler.go:275] Adding GroupVersion discovery.k8s.io v1 to ResourceManager
	I0328 01:33:31.580933    6044 command_runner.go:130] ! I0328 01:32:18.525859       1 handler.go:275] Adding GroupVersion networking.k8s.io v1 to ResourceManager
	I0328 01:33:31.580933    6044 command_runner.go:130] ! W0328 01:32:18.525960       1 genericapiserver.go:742] Skipping API networking.k8s.io/v1beta1 because it has no resources.
	I0328 01:33:31.581053    6044 command_runner.go:130] ! W0328 01:32:18.525970       1 genericapiserver.go:742] Skipping API networking.k8s.io/v1alpha1 because it has no resources.
	I0328 01:33:31.581158    6044 command_runner.go:130] ! I0328 01:32:18.526646       1 handler.go:275] Adding GroupVersion node.k8s.io v1 to ResourceManager
	I0328 01:33:31.581216    6044 command_runner.go:130] ! W0328 01:32:18.526842       1 genericapiserver.go:742] Skipping API node.k8s.io/v1beta1 because it has no resources.
	I0328 01:33:31.581262    6044 command_runner.go:130] ! W0328 01:32:18.526857       1 genericapiserver.go:742] Skipping API node.k8s.io/v1alpha1 because it has no resources.
	I0328 01:33:31.581262    6044 command_runner.go:130] ! I0328 01:32:18.527970       1 handler.go:275] Adding GroupVersion policy v1 to ResourceManager
	I0328 01:33:31.581446    6044 command_runner.go:130] ! W0328 01:32:18.528080       1 genericapiserver.go:742] Skipping API policy/v1beta1 because it has no resources.
	I0328 01:33:31.581560    6044 command_runner.go:130] ! I0328 01:32:18.530546       1 handler.go:275] Adding GroupVersion rbac.authorization.k8s.io v1 to ResourceManager
	I0328 01:33:31.581598    6044 command_runner.go:130] ! W0328 01:32:18.530652       1 genericapiserver.go:742] Skipping API rbac.authorization.k8s.io/v1beta1 because it has no resources.
	I0328 01:33:31.581690    6044 command_runner.go:130] ! W0328 01:32:18.530663       1 genericapiserver.go:742] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
	I0328 01:33:31.581736    6044 command_runner.go:130] ! I0328 01:32:18.531469       1 handler.go:275] Adding GroupVersion scheduling.k8s.io v1 to ResourceManager
	I0328 01:33:31.581736    6044 command_runner.go:130] ! W0328 01:32:18.531576       1 genericapiserver.go:742] Skipping API scheduling.k8s.io/v1beta1 because it has no resources.
	I0328 01:33:31.581803    6044 command_runner.go:130] ! W0328 01:32:18.531586       1 genericapiserver.go:742] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
	I0328 01:33:31.581868    6044 command_runner.go:130] ! I0328 01:32:18.534848       1 handler.go:275] Adding GroupVersion storage.k8s.io v1 to ResourceManager
	I0328 01:33:31.581868    6044 command_runner.go:130] ! W0328 01:32:18.534946       1 genericapiserver.go:742] Skipping API storage.k8s.io/v1beta1 because it has no resources.
	I0328 01:33:31.581927    6044 command_runner.go:130] ! W0328 01:32:18.534974       1 genericapiserver.go:742] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
	I0328 01:33:31.581927    6044 command_runner.go:130] ! I0328 01:32:18.537355       1 handler.go:275] Adding GroupVersion flowcontrol.apiserver.k8s.io v1beta3 to ResourceManager
	I0328 01:33:31.581990    6044 command_runner.go:130] ! I0328 01:32:18.539242       1 handler.go:275] Adding GroupVersion flowcontrol.apiserver.k8s.io v1 to ResourceManager
	I0328 01:33:31.582048    6044 command_runner.go:130] ! W0328 01:32:18.539354       1 genericapiserver.go:742] Skipping API flowcontrol.apiserver.k8s.io/v1beta2 because it has no resources.
	I0328 01:33:31.582110    6044 command_runner.go:130] ! W0328 01:32:18.539387       1 genericapiserver.go:742] Skipping API flowcontrol.apiserver.k8s.io/v1beta1 because it has no resources.
	I0328 01:33:31.582110    6044 command_runner.go:130] ! I0328 01:32:18.545662       1 handler.go:275] Adding GroupVersion apps v1 to ResourceManager
	I0328 01:33:31.582168    6044 command_runner.go:130] ! W0328 01:32:18.545825       1 genericapiserver.go:742] Skipping API apps/v1beta2 because it has no resources.
	I0328 01:33:31.582168    6044 command_runner.go:130] ! W0328 01:32:18.545834       1 genericapiserver.go:742] Skipping API apps/v1beta1 because it has no resources.
	I0328 01:33:31.582232    6044 command_runner.go:130] ! I0328 01:32:18.547229       1 handler.go:275] Adding GroupVersion admissionregistration.k8s.io v1 to ResourceManager
	I0328 01:33:31.582290    6044 command_runner.go:130] ! W0328 01:32:18.547341       1 genericapiserver.go:742] Skipping API admissionregistration.k8s.io/v1beta1 because it has no resources.
	I0328 01:33:31.582290    6044 command_runner.go:130] ! W0328 01:32:18.547350       1 genericapiserver.go:742] Skipping API admissionregistration.k8s.io/v1alpha1 because it has no resources.
	I0328 01:33:31.582413    6044 command_runner.go:130] ! I0328 01:32:18.548292       1 handler.go:275] Adding GroupVersion events.k8s.io v1 to ResourceManager
	I0328 01:33:31.582413    6044 command_runner.go:130] ! W0328 01:32:18.548390       1 genericapiserver.go:742] Skipping API events.k8s.io/v1beta1 because it has no resources.
	I0328 01:33:31.582478    6044 command_runner.go:130] ! I0328 01:32:18.574598       1 handler.go:275] Adding GroupVersion apiregistration.k8s.io v1 to ResourceManager
	I0328 01:33:31.582606    6044 command_runner.go:130] ! W0328 01:32:18.574814       1 genericapiserver.go:742] Skipping API apiregistration.k8s.io/v1beta1 because it has no resources.
	I0328 01:33:31.582663    6044 command_runner.go:130] ! I0328 01:32:19.274952       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0328 01:33:31.582663    6044 command_runner.go:130] ! I0328 01:32:19.275081       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0328 01:33:31.582728    6044 command_runner.go:130] ! I0328 01:32:19.275445       1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key"
	I0328 01:33:31.582787    6044 command_runner.go:130] ! I0328 01:32:19.275546       1 secure_serving.go:213] Serving securely on [::]:8443
	I0328 01:33:31.582787    6044 command_runner.go:130] ! I0328 01:32:19.275631       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0328 01:33:31.582849    6044 command_runner.go:130] ! I0328 01:32:19.276130       1 controller.go:80] Starting OpenAPI V3 AggregationController
	I0328 01:33:31.582888    6044 command_runner.go:130] ! I0328 01:32:19.279110       1 available_controller.go:423] Starting AvailableConditionController
	I0328 01:33:31.582989    6044 command_runner.go:130] ! I0328 01:32:19.280530       1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
	I0328 01:33:31.582989    6044 command_runner.go:130] ! I0328 01:32:19.289454       1 controller.go:116] Starting legacy_token_tracking_controller
	I0328 01:33:31.582989    6044 command_runner.go:130] ! I0328 01:32:19.289554       1 shared_informer.go:311] Waiting for caches to sync for configmaps
	I0328 01:33:31.582989    6044 command_runner.go:130] ! I0328 01:32:19.289661       1 aggregator.go:163] waiting for initial CRD sync...
	I0328 01:33:31.582989    6044 command_runner.go:130] ! I0328 01:32:19.291196       1 gc_controller.go:78] Starting apiserver lease garbage collector
	I0328 01:33:31.583095    6044 command_runner.go:130] ! I0328 01:32:19.291542       1 dynamic_serving_content.go:132] "Starting controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key"
	I0328 01:33:31.583095    6044 command_runner.go:130] ! I0328 01:32:19.292314       1 apiservice_controller.go:97] Starting APIServiceRegistrationController
	I0328 01:33:31.583095    6044 command_runner.go:130] ! I0328 01:32:19.292353       1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
	I0328 01:33:31.583095    6044 command_runner.go:130] ! I0328 01:32:19.292376       1 controller.go:78] Starting OpenAPI AggregationController
	I0328 01:33:31.583095    6044 command_runner.go:130] ! I0328 01:32:19.293395       1 system_namespaces_controller.go:67] Starting system namespaces controller
	I0328 01:33:31.583215    6044 command_runner.go:130] ! I0328 01:32:19.293575       1 customresource_discovery_controller.go:289] Starting DiscoveryController
	I0328 01:33:31.583215    6044 command_runner.go:130] ! I0328 01:32:19.279263       1 handler_discovery.go:412] Starting ResourceDiscoveryManager
	I0328 01:33:31.583215    6044 command_runner.go:130] ! I0328 01:32:19.301011       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0328 01:33:31.583328    6044 command_runner.go:130] ! I0328 01:32:19.301029       1 shared_informer.go:311] Waiting for caches to sync for crd-autoregister
	I0328 01:33:31.583328    6044 command_runner.go:130] ! I0328 01:32:19.304174       1 controller.go:133] Starting OpenAPI controller
	I0328 01:33:31.583328    6044 command_runner.go:130] ! I0328 01:32:19.304213       1 controller.go:85] Starting OpenAPI V3 controller
	I0328 01:33:31.583328    6044 command_runner.go:130] ! I0328 01:32:19.306745       1 naming_controller.go:291] Starting NamingConditionController
	I0328 01:33:31.583328    6044 command_runner.go:130] ! I0328 01:32:19.306779       1 establishing_controller.go:76] Starting EstablishingController
	I0328 01:33:31.583328    6044 command_runner.go:130] ! I0328 01:32:19.306794       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0328 01:33:31.583328    6044 command_runner.go:130] ! I0328 01:32:19.306807       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0328 01:33:31.583328    6044 command_runner.go:130] ! I0328 01:32:19.306818       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0328 01:33:31.583328    6044 command_runner.go:130] ! I0328 01:32:19.279295       1 apf_controller.go:374] Starting API Priority and Fairness config controller
	I0328 01:33:31.583655    6044 command_runner.go:130] ! I0328 01:32:19.279442       1 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
	I0328 01:33:31.583655    6044 command_runner.go:130] ! I0328 01:32:19.312069       1 shared_informer.go:311] Waiting for caches to sync for cluster_authentication_trust_controller
	I0328 01:33:31.583655    6044 command_runner.go:130] ! I0328 01:32:19.334928       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0328 01:33:31.583655    6044 command_runner.go:130] ! I0328 01:32:19.335653       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0328 01:33:31.583655    6044 command_runner.go:130] ! I0328 01:32:19.499336       1 shared_informer.go:318] Caches are synced for configmaps
	I0328 01:33:31.583655    6044 command_runner.go:130] ! I0328 01:32:19.501912       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0328 01:33:31.583655    6044 command_runner.go:130] ! I0328 01:32:19.504433       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0328 01:33:31.583655    6044 command_runner.go:130] ! I0328 01:32:19.506496       1 aggregator.go:165] initial CRD sync complete...
	I0328 01:33:31.583655    6044 command_runner.go:130] ! I0328 01:32:19.506538       1 autoregister_controller.go:141] Starting autoregister controller
	I0328 01:33:31.583655    6044 command_runner.go:130] ! I0328 01:32:19.506548       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0328 01:33:31.583882    6044 command_runner.go:130] ! I0328 01:32:19.506871       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0328 01:33:31.583882    6044 command_runner.go:130] ! I0328 01:32:19.506977       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0328 01:33:31.583882    6044 command_runner.go:130] ! I0328 01:32:19.519086       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0328 01:33:31.583882    6044 command_runner.go:130] ! I0328 01:32:19.542058       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0328 01:33:31.583882    6044 command_runner.go:130] ! I0328 01:32:19.580921       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0328 01:33:31.583882    6044 command_runner.go:130] ! I0328 01:32:19.592848       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0328 01:33:31.583882    6044 command_runner.go:130] ! I0328 01:32:19.608262       1 cache.go:39] Caches are synced for autoregister controller
	I0328 01:33:31.583882    6044 command_runner.go:130] ! I0328 01:32:20.302603       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0328 01:33:31.584005    6044 command_runner.go:130] ! W0328 01:32:20.857698       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.28.227.122 172.28.229.19]
	I0328 01:33:31.584119    6044 command_runner.go:130] ! I0328 01:32:20.859624       1 controller.go:624] quota admission added evaluator for: endpoints
	I0328 01:33:31.584203    6044 command_runner.go:130] ! I0328 01:32:20.870212       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0328 01:33:31.584203    6044 command_runner.go:130] ! I0328 01:32:22.795650       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0328 01:33:31.584203    6044 command_runner.go:130] ! I0328 01:32:23.151124       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0328 01:33:31.584281    6044 command_runner.go:130] ! I0328 01:32:23.177645       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0328 01:33:31.584281    6044 command_runner.go:130] ! I0328 01:32:23.338313       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0328 01:33:31.584281    6044 command_runner.go:130] ! I0328 01:32:23.353620       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0328 01:33:31.584281    6044 command_runner.go:130] ! W0328 01:32:40.864669       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.28.229.19]
	I0328 01:33:31.592104    6044 logs.go:123] Gathering logs for etcd [ab4a76ecb029] ...
	I0328 01:33:31.592104    6044 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab4a76ecb029"
	I0328 01:33:31.634287    6044 command_runner.go:130] ! {"level":"warn","ts":"2024-03-28T01:32:15.724971Z","caller":"embed/config.go:679","msg":"Running http and grpc server on single port. This is not recommended for production."}
	I0328 01:33:31.634615    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:15.726473Z","caller":"etcdmain/etcd.go:73","msg":"Running: ","args":["etcd","--advertise-client-urls=https://172.28.229.19:2379","--cert-file=/var/lib/minikube/certs/etcd/server.crt","--client-cert-auth=true","--data-dir=/var/lib/minikube/etcd","--experimental-initial-corrupt-check=true","--experimental-watch-progress-notify-interval=5s","--initial-advertise-peer-urls=https://172.28.229.19:2380","--initial-cluster=multinode-240000=https://172.28.229.19:2380","--key-file=/var/lib/minikube/certs/etcd/server.key","--listen-client-urls=https://127.0.0.1:2379,https://172.28.229.19:2379","--listen-metrics-urls=http://127.0.0.1:2381","--listen-peer-urls=https://172.28.229.19:2380","--name=multinode-240000","--peer-cert-file=/var/lib/minikube/certs/etcd/peer.crt","--peer-client-cert-auth=true","--peer-key-file=/var/lib/minikube/certs/etcd/peer.key","--peer-trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt","--proxy-refresh-interval=70000","--snapshot-count=10000","--trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt"]}
	I0328 01:33:31.634698    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:15.727203Z","caller":"etcdmain/etcd.go:116","msg":"server has been already initialized","data-dir":"/var/lib/minikube/etcd","dir-type":"member"}
	I0328 01:33:31.634698    6044 command_runner.go:130] ! {"level":"warn","ts":"2024-03-28T01:32:15.727384Z","caller":"embed/config.go:679","msg":"Running http and grpc server on single port. This is not recommended for production."}
	I0328 01:33:31.634698    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:15.727623Z","caller":"embed/etcd.go:127","msg":"configuring peer listeners","listen-peer-urls":["https://172.28.229.19:2380"]}
	I0328 01:33:31.634757    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:15.728158Z","caller":"embed/etcd.go:494","msg":"starting with peer TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	I0328 01:33:31.634757    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:15.738374Z","caller":"embed/etcd.go:135","msg":"configuring client listeners","listen-client-urls":["https://127.0.0.1:2379","https://172.28.229.19:2379"]}
	I0328 01:33:31.637280    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:15.74108Z","caller":"embed/etcd.go:308","msg":"starting an etcd server","etcd-version":"3.5.12","git-sha":"e7b3bb6cc","go-version":"go1.20.13","go-os":"linux","go-arch":"amd64","max-cpu-set":2,"max-cpu-available":2,"member-initialized":true,"name":"multinode-240000","data-dir":"/var/lib/minikube/etcd","wal-dir":"","wal-dir-dedicated":"","member-dir":"/var/lib/minikube/etcd/member","force-new-cluster":false,"heartbeat-interval":"100ms","election-timeout":"1s","initial-election-tick-advance":true,"snapshot-count":10000,"max-wals":5,"max-snapshots":5,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["https://172.28.229.19:2380"],"listen-peer-urls":["https://172.28.229.19:2380"],"advertise-client-urls":["https://172.28.229.19:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.28.229.19:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"],"cors":["*"],"host-whitelist":["*"],"initial-cluster":"","initial-cluster-state":"new","initial-cluster-token":"","quota-backend-bytes":2147483648,"max-request-bytes":1572864,"max-concurrent-streams":4294967295,"pre-vote":true,"initial-corrupt-check":true,"corrupt-check-time-interval":"0s","compact-check-time-enabled":false,"compact-check-time-interval":"1m0s","auto-compaction-mode":"periodic","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":"","downgrade-check-interval":"5s"}
	I0328 01:33:31.638395    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:15.764546Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"21.677054ms"}
	I0328 01:33:31.638477    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:15.798451Z","caller":"etcdserver/server.go:532","msg":"No snapshot found. Recovering WAL from scratch!"}
	I0328 01:33:31.638553    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:15.829844Z","caller":"etcdserver/raft.go:530","msg":"restarting local member","cluster-id":"9d63dbc5e8f5386f","local-member-id":"8337aaa1903c5250","commit-index":2146}
	I0328 01:33:31.638573    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:15.830336Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8337aaa1903c5250 switched to configuration voters=()"}
	I0328 01:33:31.638694    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:15.830979Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8337aaa1903c5250 became follower at term 2"}
	I0328 01:33:31.638694    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:15.831279Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft 8337aaa1903c5250 [peers: [], term: 2, commit: 2146, applied: 0, lastindex: 2146, lastterm: 2]"}
	I0328 01:33:31.638694    6044 command_runner.go:130] ! {"level":"warn","ts":"2024-03-28T01:32:15.847923Z","caller":"auth/store.go:1241","msg":"simple token is not cryptographically signed"}
	I0328 01:33:31.638694    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:15.855761Z","caller":"mvcc/kvstore.go:341","msg":"restored last compact revision","meta-bucket-name":"meta","meta-bucket-name-key":"finishedCompactRev","restored-compact-revision":1393}
	I0328 01:33:31.638694    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:15.869333Z","caller":"mvcc/kvstore.go:407","msg":"kvstore restored","current-rev":1856}
	I0328 01:33:31.638694    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:15.878748Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	I0328 01:33:31.638694    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:15.88958Z","caller":"etcdserver/corrupt.go:96","msg":"starting initial corruption check","local-member-id":"8337aaa1903c5250","timeout":"7s"}
	I0328 01:33:31.638694    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:15.890509Z","caller":"etcdserver/corrupt.go:177","msg":"initial corruption checking passed; no corruption","local-member-id":"8337aaa1903c5250"}
	I0328 01:33:31.638694    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:15.890567Z","caller":"etcdserver/server.go:860","msg":"starting etcd server","local-member-id":"8337aaa1903c5250","local-server-version":"3.5.12","cluster-version":"to_be_decided"}
	I0328 01:33:31.638694    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:15.891226Z","caller":"etcdserver/server.go:760","msg":"starting initial election tick advance","election-ticks":10}
	I0328 01:33:31.638694    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:15.894393Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	I0328 01:33:31.638694    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:15.894489Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	I0328 01:33:31.638694    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:15.894506Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	I0328 01:33:31.638694    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:15.895008Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8337aaa1903c5250 switched to configuration voters=(9455213553573974608)"}
	I0328 01:33:31.638694    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:15.895115Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"9d63dbc5e8f5386f","local-member-id":"8337aaa1903c5250","added-peer-id":"8337aaa1903c5250","added-peer-peer-urls":["https://172.28.227.122:2380"]}
	I0328 01:33:31.638694    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:15.895259Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"9d63dbc5e8f5386f","local-member-id":"8337aaa1903c5250","cluster-version":"3.5"}
	I0328 01:33:31.638694    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:15.895348Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	I0328 01:33:31.639224    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:15.908515Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	I0328 01:33:31.639346    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:15.908865Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"8337aaa1903c5250","initial-advertise-peer-urls":["https://172.28.229.19:2380"],"listen-peer-urls":["https://172.28.229.19:2380"],"advertise-client-urls":["https://172.28.229.19:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.28.229.19:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	I0328 01:33:31.639346    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:15.908914Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	I0328 01:33:31.639346    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:15.908997Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"172.28.229.19:2380"}
	I0328 01:33:31.639346    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:15.909011Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"172.28.229.19:2380"}
	I0328 01:33:31.639346    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:17.232003Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8337aaa1903c5250 is starting a new election at term 2"}
	I0328 01:33:31.639346    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:17.232075Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8337aaa1903c5250 became pre-candidate at term 2"}
	I0328 01:33:31.639346    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:17.232112Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8337aaa1903c5250 received MsgPreVoteResp from 8337aaa1903c5250 at term 2"}
	I0328 01:33:31.639346    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:17.232126Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8337aaa1903c5250 became candidate at term 3"}
	I0328 01:33:31.639346    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:17.232135Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8337aaa1903c5250 received MsgVoteResp from 8337aaa1903c5250 at term 3"}
	I0328 01:33:31.639346    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:17.232146Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8337aaa1903c5250 became leader at term 3"}
	I0328 01:33:31.639346    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:17.232158Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 8337aaa1903c5250 elected leader 8337aaa1903c5250 at term 3"}
	I0328 01:33:31.639346    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:17.237341Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"8337aaa1903c5250","local-member-attributes":"{Name:multinode-240000 ClientURLs:[https://172.28.229.19:2379]}","request-path":"/0/members/8337aaa1903c5250/attributes","cluster-id":"9d63dbc5e8f5386f","publish-timeout":"7s"}
	I0328 01:33:31.639346    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:17.237562Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	I0328 01:33:31.639346    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:17.239961Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	I0328 01:33:31.639346    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:17.263569Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	I0328 01:33:31.639346    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:17.263595Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	I0328 01:33:31.639346    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:17.283007Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.28.229.19:2379"}
	I0328 01:33:31.639346    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:17.301354Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	I0328 01:33:31.647861    6044 logs.go:123] Gathering logs for coredns [e6a5a75ec447] ...
	I0328 01:33:31.647861    6044 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6a5a75ec447"
	I0328 01:33:31.681339    6044 command_runner.go:130] > .:53
	I0328 01:33:31.681339    6044 command_runner.go:130] > [INFO] plugin/reload: Running configuration SHA512 = 61f4d0960164fdf8d8157aaa96d041acf5b29f3c98ba802d705114162ff9f2cc889bbb973f9b8023f3112734912ee6f4eadc4faa21115183d5697de30dae3805
	I0328 01:33:31.681339    6044 command_runner.go:130] > CoreDNS-1.11.1
	I0328 01:33:31.681339    6044 command_runner.go:130] > linux/amd64, go1.20.7, ae2bbc2
	I0328 01:33:31.681339    6044 command_runner.go:130] > [INFO] 127.0.0.1:56542 - 57483 "HINFO IN 863318367541877849.2825438388179145044. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.037994825s
	I0328 01:33:31.681339    6044 logs.go:123] Gathering logs for coredns [29e516c918ef] ...
	I0328 01:33:31.681339    6044 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29e516c918ef"
	I0328 01:33:31.721923    6044 command_runner.go:130] > .:53
	I0328 01:33:31.721995    6044 command_runner.go:130] > [INFO] plugin/reload: Running configuration SHA512 = 61f4d0960164fdf8d8157aaa96d041acf5b29f3c98ba802d705114162ff9f2cc889bbb973f9b8023f3112734912ee6f4eadc4faa21115183d5697de30dae3805
	I0328 01:33:31.721995    6044 command_runner.go:130] > CoreDNS-1.11.1
	I0328 01:33:31.721995    6044 command_runner.go:130] > linux/amd64, go1.20.7, ae2bbc2
	I0328 01:33:31.722081    6044 command_runner.go:130] > [INFO] 127.0.0.1:60283 - 16312 "HINFO IN 2326044719089555672.3300393267380208701. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.054677372s
	I0328 01:33:31.722081    6044 command_runner.go:130] > [INFO] 10.244.0.3:41371 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000247501s
	I0328 01:33:31.722081    6044 command_runner.go:130] > [INFO] 10.244.0.3:43447 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.117900616s
	I0328 01:33:31.722145    6044 command_runner.go:130] > [INFO] 10.244.0.3:42513 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd 60 0.033474818s
	I0328 01:33:31.722145    6044 command_runner.go:130] > [INFO] 10.244.0.3:40448 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 1.188161196s
	I0328 01:33:31.722145    6044 command_runner.go:130] > [INFO] 10.244.1.2:56943 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000152401s
	I0328 01:33:31.722145    6044 command_runner.go:130] > [INFO] 10.244.1.2:41058 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 140 0.000086901s
	I0328 01:33:31.722196    6044 command_runner.go:130] > [INFO] 10.244.1.2:34293 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd 60 0.0000605s
	I0328 01:33:31.722221    6044 command_runner.go:130] > [INFO] 10.244.1.2:49894 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,aa,rd,ra 140 0.00006s
	I0328 01:33:31.722221    6044 command_runner.go:130] > [INFO] 10.244.0.3:49837 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001111s
	I0328 01:33:31.722221    6044 command_runner.go:130] > [INFO] 10.244.0.3:33220 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.017189461s
	I0328 01:33:31.722278    6044 command_runner.go:130] > [INFO] 10.244.0.3:45579 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000277601s
	I0328 01:33:31.722278    6044 command_runner.go:130] > [INFO] 10.244.0.3:51082 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000190101s
	I0328 01:33:31.722330    6044 command_runner.go:130] > [INFO] 10.244.0.3:51519 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.026528294s
	I0328 01:33:31.722367    6044 command_runner.go:130] > [INFO] 10.244.0.3:59498 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000117701s
	I0328 01:33:31.722367    6044 command_runner.go:130] > [INFO] 10.244.0.3:42474 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000217s
	I0328 01:33:31.722396    6044 command_runner.go:130] > [INFO] 10.244.0.3:60151 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0001204s
	I0328 01:33:31.722422    6044 command_runner.go:130] > [INFO] 10.244.1.2:50831 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001128s
	I0328 01:33:31.722447    6044 command_runner.go:130] > [INFO] 10.244.1.2:41628 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.0000727s
	I0328 01:33:31.722447    6044 command_runner.go:130] > [INFO] 10.244.1.2:58750 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000090601s
	I0328 01:33:31.722501    6044 command_runner.go:130] > [INFO] 10.244.1.2:59003 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0000565s
	I0328 01:33:31.722526    6044 command_runner.go:130] > [INFO] 10.244.1.2:44988 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.0000534s
	I0328 01:33:31.722561    6044 command_runner.go:130] > [INFO] 10.244.1.2:46242 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0000553s
	I0328 01:33:31.722593    6044 command_runner.go:130] > [INFO] 10.244.1.2:54917 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0000638s
	I0328 01:33:31.722593    6044 command_runner.go:130] > [INFO] 10.244.1.2:39304 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000177201s
	I0328 01:33:31.722621    6044 command_runner.go:130] > [INFO] 10.244.0.3:48823 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0000796s
	I0328 01:33:31.722659    6044 command_runner.go:130] > [INFO] 10.244.0.3:44709 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000142901s
	I0328 01:33:31.722659    6044 command_runner.go:130] > [INFO] 10.244.0.3:48375 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0000774s
	I0328 01:33:31.722659    6044 command_runner.go:130] > [INFO] 10.244.0.3:58925 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000125101s
	I0328 01:33:31.722697    6044 command_runner.go:130] > [INFO] 10.244.1.2:59246 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001171s
	I0328 01:33:31.722697    6044 command_runner.go:130] > [INFO] 10.244.1.2:47730 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0000697s
	I0328 01:33:31.722697    6044 command_runner.go:130] > [INFO] 10.244.1.2:33031 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0000695s
	I0328 01:33:31.722761    6044 command_runner.go:130] > [INFO] 10.244.1.2:50853 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000057s
	I0328 01:33:31.722803    6044 command_runner.go:130] > [INFO] 10.244.0.3:39682 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000390002s
	I0328 01:33:31.722803    6044 command_runner.go:130] > [INFO] 10.244.0.3:52761 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000108301s
	I0328 01:33:31.722863    6044 command_runner.go:130] > [INFO] 10.244.0.3:46476 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000158601s
	I0328 01:33:31.722863    6044 command_runner.go:130] > [INFO] 10.244.0.3:57613 - 5 "PTR IN 1.224.28.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.0000931s
	I0328 01:33:31.722863    6044 command_runner.go:130] > [INFO] 10.244.1.2:43367 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000233301s
	I0328 01:33:31.722919    6044 command_runner.go:130] > [INFO] 10.244.1.2:50120 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.0002331s
	I0328 01:33:31.722943    6044 command_runner.go:130] > [INFO] 10.244.1.2:43779 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.0000821s
	I0328 01:33:31.722943    6044 command_runner.go:130] > [INFO] 10.244.1.2:37155 - 5 "PTR IN 1.224.28.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.0000589s
	I0328 01:33:31.722970    6044 command_runner.go:130] > [INFO] SIGTERM: Shutting down servers then terminating
	I0328 01:33:31.722970    6044 command_runner.go:130] > [INFO] plugin/health: Going into lameduck mode for 5s
	I0328 01:33:31.725730    6044 logs.go:123] Gathering logs for kube-scheduler [bc83a37dbd03] ...
	I0328 01:33:31.725730    6044 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc83a37dbd03"
	I0328 01:33:31.757501    6044 command_runner.go:130] ! I0328 01:32:16.704993       1 serving.go:380] Generated self-signed cert in-memory
	I0328 01:33:31.758322    6044 command_runner.go:130] ! W0328 01:32:19.361735       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	I0328 01:33:31.758389    6044 command_runner.go:130] ! W0328 01:32:19.361772       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	I0328 01:33:31.758389    6044 command_runner.go:130] ! W0328 01:32:19.361786       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	I0328 01:33:31.758389    6044 command_runner.go:130] ! W0328 01:32:19.361794       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0328 01:33:31.758389    6044 command_runner.go:130] ! I0328 01:32:19.443650       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.29.3"
	I0328 01:33:31.758389    6044 command_runner.go:130] ! I0328 01:32:19.443696       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0328 01:33:31.758489    6044 command_runner.go:130] ! I0328 01:32:19.451824       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0328 01:33:31.758489    6044 command_runner.go:130] ! I0328 01:32:19.452157       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0328 01:33:31.758539    6044 command_runner.go:130] ! I0328 01:32:19.452206       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0328 01:33:31.758539    6044 command_runner.go:130] ! I0328 01:32:19.452231       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0328 01:33:31.758539    6044 command_runner.go:130] ! I0328 01:32:19.556393       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0328 01:33:31.759527    6044 logs.go:123] Gathering logs for kindnet [dc9808261b21] ...
	I0328 01:33:31.759527    6044 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc9808261b21"
	I0328 01:33:31.797130    6044 command_runner.go:130] ! I0328 01:18:33.819057       1 main.go:227] handling current node
	I0328 01:33:31.797130    6044 command_runner.go:130] ! I0328 01:18:33.819073       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:31.797130    6044 command_runner.go:130] ! I0328 01:18:33.819080       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:31.798012    6044 command_runner.go:130] ! I0328 01:18:33.819256       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:31.798012    6044 command_runner.go:130] ! I0328 01:18:33.819279       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:31.798012    6044 command_runner.go:130] ! I0328 01:18:43.840507       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:31.798012    6044 command_runner.go:130] ! I0328 01:18:43.840617       1 main.go:227] handling current node
	I0328 01:33:31.798012    6044 command_runner.go:130] ! I0328 01:18:43.840633       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:31.798012    6044 command_runner.go:130] ! I0328 01:18:43.840643       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:31.798012    6044 command_runner.go:130] ! I0328 01:18:43.841217       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:31.798012    6044 command_runner.go:130] ! I0328 01:18:43.841333       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:31.798012    6044 command_runner.go:130] ! I0328 01:18:53.861521       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:31.798171    6044 command_runner.go:130] ! I0328 01:18:53.861738       1 main.go:227] handling current node
	I0328 01:33:31.798195    6044 command_runner.go:130] ! I0328 01:18:53.861763       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:31.798195    6044 command_runner.go:130] ! I0328 01:18:53.861779       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:31.798195    6044 command_runner.go:130] ! I0328 01:18:53.864849       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:31.798195    6044 command_runner.go:130] ! I0328 01:18:53.864869       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:31.798195    6044 command_runner.go:130] ! I0328 01:19:03.880199       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:31.798195    6044 command_runner.go:130] ! I0328 01:19:03.880733       1 main.go:227] handling current node
	I0328 01:33:31.798273    6044 command_runner.go:130] ! I0328 01:19:03.880872       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:31.798273    6044 command_runner.go:130] ! I0328 01:19:03.880900       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:31.798273    6044 command_runner.go:130] ! I0328 01:19:03.881505       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:31.798273    6044 command_runner.go:130] ! I0328 01:19:03.881543       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:31.798342    6044 command_runner.go:130] ! I0328 01:19:13.889436       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:31.798396    6044 command_runner.go:130] ! I0328 01:19:13.889552       1 main.go:227] handling current node
	I0328 01:33:31.798396    6044 command_runner.go:130] ! I0328 01:19:13.889571       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:31.798396    6044 command_runner.go:130] ! I0328 01:19:13.889581       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:31.798396    6044 command_runner.go:130] ! I0328 01:19:13.889757       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:31.798396    6044 command_runner.go:130] ! I0328 01:19:13.889789       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:31.798396    6044 command_runner.go:130] ! I0328 01:19:23.898023       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:31.798396    6044 command_runner.go:130] ! I0328 01:19:23.898229       1 main.go:227] handling current node
	I0328 01:33:31.798396    6044 command_runner.go:130] ! I0328 01:19:23.898245       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:31.798396    6044 command_runner.go:130] ! I0328 01:19:23.898277       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:31.798396    6044 command_runner.go:130] ! I0328 01:19:23.898405       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:31.798396    6044 command_runner.go:130] ! I0328 01:19:23.898492       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:31.798396    6044 command_runner.go:130] ! I0328 01:19:33.905977       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:31.798396    6044 command_runner.go:130] ! I0328 01:19:33.906123       1 main.go:227] handling current node
	I0328 01:33:31.798396    6044 command_runner.go:130] ! I0328 01:19:33.906157       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:31.798396    6044 command_runner.go:130] ! I0328 01:19:33.906167       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:31.798396    6044 command_runner.go:130] ! I0328 01:19:33.906618       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:31.798396    6044 command_runner.go:130] ! I0328 01:19:33.906762       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:31.798396    6044 command_runner.go:130] ! I0328 01:19:43.914797       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:31.798396    6044 command_runner.go:130] ! I0328 01:19:43.914849       1 main.go:227] handling current node
	I0328 01:33:31.798396    6044 command_runner.go:130] ! I0328 01:19:43.914863       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:31.798396    6044 command_runner.go:130] ! I0328 01:19:43.914872       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:31.798396    6044 command_runner.go:130] ! I0328 01:19:43.915508       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:31.798396    6044 command_runner.go:130] ! I0328 01:19:43.915608       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:31.798396    6044 command_runner.go:130] ! I0328 01:19:53.928273       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:31.799641    6044 command_runner.go:130] ! I0328 01:19:53.928372       1 main.go:227] handling current node
	I0328 01:33:31.799641    6044 command_runner.go:130] ! I0328 01:19:53.928389       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:31.799641    6044 command_runner.go:130] ! I0328 01:19:53.928398       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:31.799641    6044 command_runner.go:130] ! I0328 01:19:53.928659       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:31.800019    6044 command_runner.go:130] ! I0328 01:19:53.928813       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:31.800019    6044 command_runner.go:130] ! I0328 01:20:03.943868       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:31.800019    6044 command_runner.go:130] ! I0328 01:20:03.943974       1 main.go:227] handling current node
	I0328 01:33:31.800019    6044 command_runner.go:130] ! I0328 01:20:03.943995       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:31.800019    6044 command_runner.go:130] ! I0328 01:20:03.944004       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:31.800019    6044 command_runner.go:130] ! I0328 01:20:03.944882       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:31.800019    6044 command_runner.go:130] ! I0328 01:20:03.944986       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:31.800019    6044 command_runner.go:130] ! I0328 01:20:13.959538       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:31.800019    6044 command_runner.go:130] ! I0328 01:20:13.959588       1 main.go:227] handling current node
	I0328 01:33:31.800019    6044 command_runner.go:130] ! I0328 01:20:13.959601       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:31.800181    6044 command_runner.go:130] ! I0328 01:20:13.959609       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:31.800181    6044 command_runner.go:130] ! I0328 01:20:13.960072       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:31.800181    6044 command_runner.go:130] ! I0328 01:20:13.960245       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:31.800181    6044 command_runner.go:130] ! I0328 01:20:23.967471       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:31.800269    6044 command_runner.go:130] ! I0328 01:20:23.967523       1 main.go:227] handling current node
	I0328 01:33:31.800269    6044 command_runner.go:130] ! I0328 01:20:23.967537       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:31.800299    6044 command_runner.go:130] ! I0328 01:20:23.967547       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:31.800299    6044 command_runner.go:130] ! I0328 01:20:23.968155       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:31.800299    6044 command_runner.go:130] ! I0328 01:20:23.968173       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:31.800299    6044 command_runner.go:130] ! I0328 01:20:33.977018       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:31.800299    6044 command_runner.go:130] ! I0328 01:20:33.977224       1 main.go:227] handling current node
	I0328 01:33:31.800299    6044 command_runner.go:130] ! I0328 01:20:33.977259       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:31.800299    6044 command_runner.go:130] ! I0328 01:20:33.977287       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:31.800299    6044 command_runner.go:130] ! I0328 01:20:33.978024       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:31.800299    6044 command_runner.go:130] ! I0328 01:20:33.978173       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:31.800299    6044 command_runner.go:130] ! I0328 01:20:43.987057       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:31.800299    6044 command_runner.go:130] ! I0328 01:20:43.987266       1 main.go:227] handling current node
	I0328 01:33:31.800299    6044 command_runner.go:130] ! I0328 01:20:43.987283       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:31.800299    6044 command_runner.go:130] ! I0328 01:20:43.987293       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:31.800455    6044 command_runner.go:130] ! I0328 01:20:43.987429       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:31.800455    6044 command_runner.go:130] ! I0328 01:20:43.987462       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:31.800455    6044 command_runner.go:130] ! I0328 01:20:53.994024       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:31.800455    6044 command_runner.go:130] ! I0328 01:20:53.994070       1 main.go:227] handling current node
	I0328 01:33:31.800455    6044 command_runner.go:130] ! I0328 01:20:53.994120       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:31.800455    6044 command_runner.go:130] ! I0328 01:20:53.994132       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:31.800548    6044 command_runner.go:130] ! I0328 01:20:53.994628       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:31.800548    6044 command_runner.go:130] ! I0328 01:20:53.994669       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:31.800548    6044 command_runner.go:130] ! I0328 01:21:04.009908       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:31.800548    6044 command_runner.go:130] ! I0328 01:21:04.010006       1 main.go:227] handling current node
	I0328 01:33:31.800548    6044 command_runner.go:130] ! I0328 01:21:04.010023       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:31.800548    6044 command_runner.go:130] ! I0328 01:21:04.010033       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:31.800650    6044 command_runner.go:130] ! I0328 01:21:04.010413       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:31.800650    6044 command_runner.go:130] ! I0328 01:21:04.010445       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:31.800650    6044 command_runner.go:130] ! I0328 01:21:14.024266       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:31.800650    6044 command_runner.go:130] ! I0328 01:21:14.024350       1 main.go:227] handling current node
	I0328 01:33:31.800650    6044 command_runner.go:130] ! I0328 01:21:14.024365       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:31.800650    6044 command_runner.go:130] ! I0328 01:21:14.024372       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:31.800650    6044 command_runner.go:130] ! I0328 01:21:14.024495       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:31.800650    6044 command_runner.go:130] ! I0328 01:21:14.024525       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:31.800650    6044 command_runner.go:130] ! I0328 01:21:24.033056       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:31.800650    6044 command_runner.go:130] ! I0328 01:21:24.033221       1 main.go:227] handling current node
	I0328 01:33:31.800650    6044 command_runner.go:130] ! I0328 01:21:24.033244       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:31.800650    6044 command_runner.go:130] ! I0328 01:21:24.033254       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:31.800650    6044 command_runner.go:130] ! I0328 01:21:24.033447       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:31.800650    6044 command_runner.go:130] ! I0328 01:21:24.033718       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:31.800650    6044 command_runner.go:130] ! I0328 01:21:34.054141       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:31.800650    6044 command_runner.go:130] ! I0328 01:21:34.054348       1 main.go:227] handling current node
	I0328 01:33:31.800650    6044 command_runner.go:130] ! I0328 01:21:34.054367       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:31.800650    6044 command_runner.go:130] ! I0328 01:21:34.054377       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:31.800650    6044 command_runner.go:130] ! I0328 01:21:34.056796       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:31.800650    6044 command_runner.go:130] ! I0328 01:21:34.056838       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:31.800650    6044 command_runner.go:130] ! I0328 01:21:44.063011       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:31.800650    6044 command_runner.go:130] ! I0328 01:21:44.063388       1 main.go:227] handling current node
	I0328 01:33:31.800650    6044 command_runner.go:130] ! I0328 01:21:44.063639       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:31.800650    6044 command_runner.go:130] ! I0328 01:21:44.063794       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:31.800650    6044 command_runner.go:130] ! I0328 01:21:44.064166       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:31.800650    6044 command_runner.go:130] ! I0328 01:21:44.064351       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:31.800650    6044 command_runner.go:130] ! I0328 01:21:54.080807       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:31.800650    6044 command_runner.go:130] ! I0328 01:21:54.080904       1 main.go:227] handling current node
	I0328 01:33:31.800650    6044 command_runner.go:130] ! I0328 01:21:54.080921       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:31.800650    6044 command_runner.go:130] ! I0328 01:21:54.080930       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:31.800650    6044 command_runner.go:130] ! I0328 01:21:54.081415       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:31.800650    6044 command_runner.go:130] ! I0328 01:21:54.081491       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:31.800650    6044 command_runner.go:130] ! I0328 01:22:04.094961       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:31.800650    6044 command_runner.go:130] ! I0328 01:22:04.095397       1 main.go:227] handling current node
	I0328 01:33:31.800650    6044 command_runner.go:130] ! I0328 01:22:04.095905       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:31.800650    6044 command_runner.go:130] ! I0328 01:22:04.096341       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:31.800650    6044 command_runner.go:130] ! I0328 01:22:04.096776       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:31.800650    6044 command_runner.go:130] ! I0328 01:22:04.096877       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:31.800650    6044 command_runner.go:130] ! I0328 01:22:14.117899       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:31.800650    6044 command_runner.go:130] ! I0328 01:22:14.118038       1 main.go:227] handling current node
	I0328 01:33:31.800650    6044 command_runner.go:130] ! I0328 01:22:14.118158       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:31.800650    6044 command_runner.go:130] ! I0328 01:22:14.118310       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:31.800650    6044 command_runner.go:130] ! I0328 01:22:14.118821       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:31.800650    6044 command_runner.go:130] ! I0328 01:22:14.119057       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:31.800650    6044 command_runner.go:130] ! I0328 01:22:24.139816       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:31.800650    6044 command_runner.go:130] ! I0328 01:22:24.140951       1 main.go:227] handling current node
	I0328 01:33:31.800650    6044 command_runner.go:130] ! I0328 01:22:24.140979       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:31.800650    6044 command_runner.go:130] ! I0328 01:22:24.140991       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:31.800650    6044 command_runner.go:130] ! I0328 01:22:24.141167       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:31.800650    6044 command_runner.go:130] ! I0328 01:22:24.141178       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:31.800650    6044 command_runner.go:130] ! I0328 01:22:34.156977       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:31.800650    6044 command_runner.go:130] ! I0328 01:22:34.157189       1 main.go:227] handling current node
	I0328 01:33:31.800650    6044 command_runner.go:130] ! I0328 01:22:34.157704       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:31.800650    6044 command_runner.go:130] ! I0328 01:22:34.157819       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:31.800650    6044 command_runner.go:130] ! I0328 01:22:34.158025       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:31.800650    6044 command_runner.go:130] ! I0328 01:22:34.158059       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:31.800650    6044 command_runner.go:130] ! I0328 01:22:44.166881       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:31.800650    6044 command_runner.go:130] ! I0328 01:22:44.167061       1 main.go:227] handling current node
	I0328 01:33:31.800650    6044 command_runner.go:130] ! I0328 01:22:44.167232       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:31.800650    6044 command_runner.go:130] ! I0328 01:22:44.167380       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:31.800650    6044 command_runner.go:130] ! I0328 01:22:44.167748       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:31.800650    6044 command_runner.go:130] ! I0328 01:22:44.167956       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:31.800650    6044 command_runner.go:130] ! I0328 01:22:54.177031       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:31.800650    6044 command_runner.go:130] ! I0328 01:22:54.177191       1 main.go:227] handling current node
	I0328 01:33:31.800650    6044 command_runner.go:130] ! I0328 01:22:54.177209       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:31.800650    6044 command_runner.go:130] ! I0328 01:22:54.177218       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:31.800650    6044 command_runner.go:130] ! I0328 01:22:54.177774       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:31.800650    6044 command_runner.go:130] ! I0328 01:22:54.177906       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:31.800650    6044 command_runner.go:130] ! I0328 01:23:04.192931       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:31.800650    6044 command_runner.go:130] ! I0328 01:23:04.193190       1 main.go:227] handling current node
	I0328 01:33:31.800650    6044 command_runner.go:130] ! I0328 01:23:04.193208       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:31.800650    6044 command_runner.go:130] ! I0328 01:23:04.193218       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:31.800650    6044 command_runner.go:130] ! I0328 01:23:04.193613       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:31.800650    6044 command_runner.go:130] ! I0328 01:23:04.193699       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:31.800650    6044 command_runner.go:130] ! I0328 01:23:14.203281       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:31.800650    6044 command_runner.go:130] ! I0328 01:23:14.203390       1 main.go:227] handling current node
	I0328 01:33:31.800650    6044 command_runner.go:130] ! I0328 01:23:14.203406       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:31.800650    6044 command_runner.go:130] ! I0328 01:23:14.203415       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:31.800650    6044 command_runner.go:130] ! I0328 01:23:14.204005       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:31.800650    6044 command_runner.go:130] ! I0328 01:23:14.204201       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:31.800650    6044 command_runner.go:130] ! I0328 01:23:24.220758       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:31.800650    6044 command_runner.go:130] ! I0328 01:23:24.220806       1 main.go:227] handling current node
	I0328 01:33:31.800650    6044 command_runner.go:130] ! I0328 01:23:24.220822       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:31.800650    6044 command_runner.go:130] ! I0328 01:23:24.220829       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:31.801665    6044 command_runner.go:130] ! I0328 01:23:24.221546       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:31.801665    6044 command_runner.go:130] ! I0328 01:23:24.221683       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:31.801665    6044 command_runner.go:130] ! I0328 01:23:34.228494       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:31.801665    6044 command_runner.go:130] ! I0328 01:23:34.228589       1 main.go:227] handling current node
	I0328 01:33:31.801665    6044 command_runner.go:130] ! I0328 01:23:34.228604       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:31.801665    6044 command_runner.go:130] ! I0328 01:23:34.228613       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:31.801665    6044 command_runner.go:130] ! I0328 01:23:34.229312       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:31.801665    6044 command_runner.go:130] ! I0328 01:23:34.229577       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:31.801665    6044 command_runner.go:130] ! I0328 01:23:44.244452       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:31.801665    6044 command_runner.go:130] ! I0328 01:23:44.244582       1 main.go:227] handling current node
	I0328 01:33:31.801665    6044 command_runner.go:130] ! I0328 01:23:44.244601       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:31.801665    6044 command_runner.go:130] ! I0328 01:23:44.244611       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:31.801665    6044 command_runner.go:130] ! I0328 01:23:44.245136       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:31.801665    6044 command_runner.go:130] ! I0328 01:23:44.245156       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:31.801665    6044 command_runner.go:130] ! I0328 01:23:54.250789       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:31.801665    6044 command_runner.go:130] ! I0328 01:23:54.250891       1 main.go:227] handling current node
	I0328 01:33:31.801665    6044 command_runner.go:130] ! I0328 01:23:54.250907       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:31.801665    6044 command_runner.go:130] ! I0328 01:23:54.250915       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:31.801665    6044 command_runner.go:130] ! I0328 01:23:54.251035       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:31.801665    6044 command_runner.go:130] ! I0328 01:23:54.251227       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:31.801665    6044 command_runner.go:130] ! I0328 01:24:04.266517       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:31.801665    6044 command_runner.go:130] ! I0328 01:24:04.266634       1 main.go:227] handling current node
	I0328 01:33:31.801665    6044 command_runner.go:130] ! I0328 01:24:04.266650       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:31.801665    6044 command_runner.go:130] ! I0328 01:24:04.266659       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:31.801665    6044 command_runner.go:130] ! I0328 01:24:04.266860       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:31.801665    6044 command_runner.go:130] ! I0328 01:24:04.266944       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:31.801665    6044 command_runner.go:130] ! I0328 01:24:14.281321       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:31.801665    6044 command_runner.go:130] ! I0328 01:24:14.281432       1 main.go:227] handling current node
	I0328 01:33:31.801665    6044 command_runner.go:130] ! I0328 01:24:14.281448       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:31.801665    6044 command_runner.go:130] ! I0328 01:24:14.281474       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:31.801665    6044 command_runner.go:130] ! I0328 01:24:14.281660       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:31.801665    6044 command_runner.go:130] ! I0328 01:24:14.281692       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:31.801665    6044 command_runner.go:130] ! I0328 01:24:24.289822       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:31.801665    6044 command_runner.go:130] ! I0328 01:24:24.290280       1 main.go:227] handling current node
	I0328 01:33:31.801665    6044 command_runner.go:130] ! I0328 01:24:24.290352       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:31.801665    6044 command_runner.go:130] ! I0328 01:24:24.290467       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:31.801665    6044 command_runner.go:130] ! I0328 01:24:24.290854       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:31.801665    6044 command_runner.go:130] ! I0328 01:24:24.290943       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:31.801665    6044 command_runner.go:130] ! I0328 01:24:34.303810       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:31.801665    6044 command_runner.go:130] ! I0328 01:24:34.303934       1 main.go:227] handling current node
	I0328 01:33:31.801665    6044 command_runner.go:130] ! I0328 01:24:34.303965       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:31.801665    6044 command_runner.go:130] ! I0328 01:24:34.303998       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:31.801665    6044 command_runner.go:130] ! I0328 01:24:34.304417       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:31.801665    6044 command_runner.go:130] ! I0328 01:24:34.304435       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:31.801665    6044 command_runner.go:130] ! I0328 01:24:44.325930       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:31.801665    6044 command_runner.go:130] ! I0328 01:24:44.326037       1 main.go:227] handling current node
	I0328 01:33:31.801665    6044 command_runner.go:130] ! I0328 01:24:44.326055       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:31.801665    6044 command_runner.go:130] ! I0328 01:24:44.326064       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:31.801665    6044 command_runner.go:130] ! I0328 01:24:44.327133       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:31.801665    6044 command_runner.go:130] ! I0328 01:24:44.327169       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:31.801665    6044 command_runner.go:130] ! I0328 01:24:54.342811       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:31.801665    6044 command_runner.go:130] ! I0328 01:24:54.342842       1 main.go:227] handling current node
	I0328 01:33:31.801665    6044 command_runner.go:130] ! I0328 01:24:54.342871       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:31.801665    6044 command_runner.go:130] ! I0328 01:24:54.342878       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:31.801665    6044 command_runner.go:130] ! I0328 01:24:54.343008       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:31.801665    6044 command_runner.go:130] ! I0328 01:24:54.343016       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:31.801665    6044 command_runner.go:130] ! I0328 01:25:04.359597       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:31.801665    6044 command_runner.go:130] ! I0328 01:25:04.359702       1 main.go:227] handling current node
	I0328 01:33:31.801665    6044 command_runner.go:130] ! I0328 01:25:04.359718       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:31.801665    6044 command_runner.go:130] ! I0328 01:25:04.359727       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:31.801665    6044 command_runner.go:130] ! I0328 01:25:04.360480       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:31.801665    6044 command_runner.go:130] ! I0328 01:25:04.360570       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:31.801665    6044 command_runner.go:130] ! I0328 01:25:14.367988       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:31.801665    6044 command_runner.go:130] ! I0328 01:25:14.368593       1 main.go:227] handling current node
	I0328 01:33:31.801665    6044 command_runner.go:130] ! I0328 01:25:14.368613       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:31.801665    6044 command_runner.go:130] ! I0328 01:25:14.368623       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:31.801665    6044 command_runner.go:130] ! I0328 01:25:14.368889       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:31.801665    6044 command_runner.go:130] ! I0328 01:25:14.368925       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:31.802755    6044 command_runner.go:130] ! I0328 01:25:24.402024       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:31.802755    6044 command_runner.go:130] ! I0328 01:25:24.402202       1 main.go:227] handling current node
	I0328 01:33:31.802755    6044 command_runner.go:130] ! I0328 01:25:24.402220       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:31.802755    6044 command_runner.go:130] ! I0328 01:25:24.402229       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:31.802755    6044 command_runner.go:130] ! I0328 01:25:24.402486       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:31.802755    6044 command_runner.go:130] ! I0328 01:25:24.402522       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:31.802755    6044 command_runner.go:130] ! I0328 01:25:34.417358       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:31.802755    6044 command_runner.go:130] ! I0328 01:25:34.417459       1 main.go:227] handling current node
	I0328 01:33:31.802755    6044 command_runner.go:130] ! I0328 01:25:34.417475       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:31.802755    6044 command_runner.go:130] ! I0328 01:25:34.417485       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:31.802755    6044 command_runner.go:130] ! I0328 01:25:34.417877       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:31.802755    6044 command_runner.go:130] ! I0328 01:25:34.418025       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:31.802755    6044 command_runner.go:130] ! I0328 01:25:44.434985       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:31.802755    6044 command_runner.go:130] ! I0328 01:25:44.435206       1 main.go:227] handling current node
	I0328 01:33:31.802755    6044 command_runner.go:130] ! I0328 01:25:44.435441       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:31.802755    6044 command_runner.go:130] ! I0328 01:25:44.435475       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:31.802755    6044 command_runner.go:130] ! I0328 01:25:44.435904       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:31.802755    6044 command_runner.go:130] ! I0328 01:25:44.436000       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:31.802755    6044 command_runner.go:130] ! I0328 01:25:54.449873       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:31.802755    6044 command_runner.go:130] ! I0328 01:25:54.449975       1 main.go:227] handling current node
	I0328 01:33:31.802755    6044 command_runner.go:130] ! I0328 01:25:54.449990       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:31.802755    6044 command_runner.go:130] ! I0328 01:25:54.449999       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:31.802755    6044 command_runner.go:130] ! I0328 01:25:54.450243       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:31.802755    6044 command_runner.go:130] ! I0328 01:25:54.450388       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:31.802755    6044 command_runner.go:130] ! I0328 01:26:04.463682       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:31.802755    6044 command_runner.go:130] ! I0328 01:26:04.463799       1 main.go:227] handling current node
	I0328 01:33:31.802755    6044 command_runner.go:130] ! I0328 01:26:04.463816       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:31.802755    6044 command_runner.go:130] ! I0328 01:26:04.463828       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:31.802755    6044 command_runner.go:130] ! I0328 01:26:04.463959       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:31.802755    6044 command_runner.go:130] ! I0328 01:26:04.463990       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:31.802755    6044 command_runner.go:130] ! I0328 01:26:14.470825       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:31.802755    6044 command_runner.go:130] ! I0328 01:26:14.471577       1 main.go:227] handling current node
	I0328 01:33:31.802755    6044 command_runner.go:130] ! I0328 01:26:14.471678       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:31.802755    6044 command_runner.go:130] ! I0328 01:26:14.471692       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:31.802755    6044 command_runner.go:130] ! I0328 01:26:14.472010       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:31.802755    6044 command_runner.go:130] ! I0328 01:26:14.472170       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:31.802755    6044 command_runner.go:130] ! I0328 01:26:24.485860       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:31.802755    6044 command_runner.go:130] ! I0328 01:26:24.485913       1 main.go:227] handling current node
	I0328 01:33:31.802755    6044 command_runner.go:130] ! I0328 01:26:24.485944       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:31.802755    6044 command_runner.go:130] ! I0328 01:26:24.485951       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:31.802755    6044 command_runner.go:130] ! I0328 01:26:24.486383       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:31.802755    6044 command_runner.go:130] ! I0328 01:26:24.486499       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:31.802755    6044 command_runner.go:130] ! I0328 01:26:34.502352       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:31.802755    6044 command_runner.go:130] ! I0328 01:26:34.502457       1 main.go:227] handling current node
	I0328 01:33:31.802755    6044 command_runner.go:130] ! I0328 01:26:34.502475       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:31.802755    6044 command_runner.go:130] ! I0328 01:26:34.502484       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:31.802755    6044 command_runner.go:130] ! I0328 01:26:34.502671       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:31.802755    6044 command_runner.go:130] ! I0328 01:26:34.502731       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:31.802755    6044 command_runner.go:130] ! I0328 01:26:44.515791       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:31.802755    6044 command_runner.go:130] ! I0328 01:26:44.516785       1 main.go:227] handling current node
	I0328 01:33:31.802755    6044 command_runner.go:130] ! I0328 01:26:44.517605       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:31.802755    6044 command_runner.go:130] ! I0328 01:26:44.518163       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:31.802755    6044 command_runner.go:130] ! I0328 01:26:44.518724       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:31.802755    6044 command_runner.go:130] ! I0328 01:26:44.519042       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:31.802755    6044 command_runner.go:130] ! I0328 01:26:54.536706       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:31.802755    6044 command_runner.go:130] ! I0328 01:26:54.536762       1 main.go:227] handling current node
	I0328 01:33:31.802755    6044 command_runner.go:130] ! I0328 01:26:54.536796       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:31.802755    6044 command_runner.go:130] ! I0328 01:26:54.537236       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:31.802755    6044 command_runner.go:130] ! I0328 01:26:54.537725       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:31.802755    6044 command_runner.go:130] ! I0328 01:26:54.537823       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:31.802755    6044 command_runner.go:130] ! I0328 01:27:04.553753       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:31.802755    6044 command_runner.go:130] ! I0328 01:27:04.553802       1 main.go:227] handling current node
	I0328 01:33:31.802755    6044 command_runner.go:130] ! I0328 01:27:04.553813       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:31.802755    6044 command_runner.go:130] ! I0328 01:27:04.553820       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:31.802755    6044 command_runner.go:130] ! I0328 01:27:04.554279       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:31.802755    6044 command_runner.go:130] ! I0328 01:27:04.554301       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:31.802755    6044 command_runner.go:130] ! I0328 01:27:14.572473       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:31.802755    6044 command_runner.go:130] ! I0328 01:27:14.572567       1 main.go:227] handling current node
	I0328 01:33:31.803643    6044 command_runner.go:130] ! I0328 01:27:14.572583       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:31.803643    6044 command_runner.go:130] ! I0328 01:27:14.572591       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:31.803643    6044 command_runner.go:130] ! I0328 01:27:14.572710       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:31.803643    6044 command_runner.go:130] ! I0328 01:27:14.572740       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:31.803643    6044 command_runner.go:130] ! I0328 01:27:24.579996       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:31.803643    6044 command_runner.go:130] ! I0328 01:27:24.580041       1 main.go:227] handling current node
	I0328 01:33:31.803643    6044 command_runner.go:130] ! I0328 01:27:24.580053       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:31.803643    6044 command_runner.go:130] ! I0328 01:27:24.580357       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:31.803643    6044 command_runner.go:130] ! I0328 01:27:34.590722       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:31.803643    6044 command_runner.go:130] ! I0328 01:27:34.590837       1 main.go:227] handling current node
	I0328 01:33:31.803643    6044 command_runner.go:130] ! I0328 01:27:34.590855       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:31.803643    6044 command_runner.go:130] ! I0328 01:27:34.590864       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:31.803643    6044 command_runner.go:130] ! I0328 01:27:34.591158       1 main.go:223] Handling node with IPs: map[172.28.224.172:{}]
	I0328 01:33:31.803643    6044 command_runner.go:130] ! I0328 01:27:34.591426       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.3.0/24] 
	I0328 01:33:31.803643    6044 command_runner.go:130] ! I0328 01:27:34.591599       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.3.0/24 Src: <nil> Gw: 172.28.224.172 Flags: [] Table: 0} 
	I0328 01:33:31.803643    6044 command_runner.go:130] ! I0328 01:27:44.598527       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:31.803643    6044 command_runner.go:130] ! I0328 01:27:44.598576       1 main.go:227] handling current node
	I0328 01:33:31.803643    6044 command_runner.go:130] ! I0328 01:27:44.598590       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:31.803643    6044 command_runner.go:130] ! I0328 01:27:44.598597       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:31.803643    6044 command_runner.go:130] ! I0328 01:27:44.599051       1 main.go:223] Handling node with IPs: map[172.28.224.172:{}]
	I0328 01:33:31.803643    6044 command_runner.go:130] ! I0328 01:27:44.599199       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.3.0/24] 
	I0328 01:33:31.803643    6044 command_runner.go:130] ! I0328 01:27:54.612380       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:31.803643    6044 command_runner.go:130] ! I0328 01:27:54.612492       1 main.go:227] handling current node
	I0328 01:33:31.803643    6044 command_runner.go:130] ! I0328 01:27:54.612511       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:31.803643    6044 command_runner.go:130] ! I0328 01:27:54.612521       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:31.803643    6044 command_runner.go:130] ! I0328 01:27:54.612644       1 main.go:223] Handling node with IPs: map[172.28.224.172:{}]
	I0328 01:33:31.803643    6044 command_runner.go:130] ! I0328 01:27:54.612675       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.3.0/24] 
	I0328 01:33:31.803643    6044 command_runner.go:130] ! I0328 01:28:04.619944       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:31.803643    6044 command_runner.go:130] ! I0328 01:28:04.619975       1 main.go:227] handling current node
	I0328 01:33:31.803643    6044 command_runner.go:130] ! I0328 01:28:04.619987       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:31.803643    6044 command_runner.go:130] ! I0328 01:28:04.619994       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:31.803643    6044 command_runner.go:130] ! I0328 01:28:04.620739       1 main.go:223] Handling node with IPs: map[172.28.224.172:{}]
	I0328 01:33:31.803643    6044 command_runner.go:130] ! I0328 01:28:04.620826       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.3.0/24] 
	I0328 01:33:31.803643    6044 command_runner.go:130] ! I0328 01:28:14.637978       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:31.803643    6044 command_runner.go:130] ! I0328 01:28:14.638455       1 main.go:227] handling current node
	I0328 01:33:31.803643    6044 command_runner.go:130] ! I0328 01:28:14.639024       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:31.803643    6044 command_runner.go:130] ! I0328 01:28:14.639507       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:31.803643    6044 command_runner.go:130] ! I0328 01:28:14.640025       1 main.go:223] Handling node with IPs: map[172.28.224.172:{}]
	I0328 01:33:31.803643    6044 command_runner.go:130] ! I0328 01:28:14.640512       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.3.0/24] 
	I0328 01:33:31.803643    6044 command_runner.go:130] ! I0328 01:28:24.648901       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:31.803643    6044 command_runner.go:130] ! I0328 01:28:24.649550       1 main.go:227] handling current node
	I0328 01:33:31.803643    6044 command_runner.go:130] ! I0328 01:28:24.649741       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:31.803643    6044 command_runner.go:130] ! I0328 01:28:24.650198       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:31.803643    6044 command_runner.go:130] ! I0328 01:28:24.650806       1 main.go:223] Handling node with IPs: map[172.28.224.172:{}]
	I0328 01:33:31.803643    6044 command_runner.go:130] ! I0328 01:28:24.651143       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.3.0/24] 
	I0328 01:33:31.803643    6044 command_runner.go:130] ! I0328 01:28:34.657839       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:31.803643    6044 command_runner.go:130] ! I0328 01:28:34.658038       1 main.go:227] handling current node
	I0328 01:33:31.803643    6044 command_runner.go:130] ! I0328 01:28:34.658054       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:31.803643    6044 command_runner.go:130] ! I0328 01:28:34.658080       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:31.803643    6044 command_runner.go:130] ! I0328 01:28:34.658271       1 main.go:223] Handling node with IPs: map[172.28.224.172:{}]
	I0328 01:33:31.803643    6044 command_runner.go:130] ! I0328 01:28:34.658831       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.3.0/24] 
	I0328 01:33:31.803643    6044 command_runner.go:130] ! I0328 01:28:44.666644       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:31.803643    6044 command_runner.go:130] ! I0328 01:28:44.666752       1 main.go:227] handling current node
	I0328 01:33:31.803643    6044 command_runner.go:130] ! I0328 01:28:44.666769       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:31.803643    6044 command_runner.go:130] ! I0328 01:28:44.666778       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:31.803643    6044 command_runner.go:130] ! I0328 01:28:44.667298       1 main.go:223] Handling node with IPs: map[172.28.224.172:{}]
	I0328 01:33:31.803643    6044 command_runner.go:130] ! I0328 01:28:44.667513       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.3.0/24] 
	I0328 01:33:31.803643    6044 command_runner.go:130] ! I0328 01:28:54.679890       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:31.803643    6044 command_runner.go:130] ! I0328 01:28:54.679999       1 main.go:227] handling current node
	I0328 01:33:31.803643    6044 command_runner.go:130] ! I0328 01:28:54.680015       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:31.803643    6044 command_runner.go:130] ! I0328 01:28:54.680023       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:31.803643    6044 command_runner.go:130] ! I0328 01:28:54.680512       1 main.go:223] Handling node with IPs: map[172.28.224.172:{}]
	I0328 01:33:31.803643    6044 command_runner.go:130] ! I0328 01:28:54.680547       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.3.0/24] 
	I0328 01:33:31.803643    6044 command_runner.go:130] ! I0328 01:29:04.687598       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:31.803643    6044 command_runner.go:130] ! I0328 01:29:04.687765       1 main.go:227] handling current node
	I0328 01:33:31.803643    6044 command_runner.go:130] ! I0328 01:29:04.687785       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:31.803643    6044 command_runner.go:130] ! I0328 01:29:04.687796       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:31.803643    6044 command_runner.go:130] ! I0328 01:29:04.687963       1 main.go:223] Handling node with IPs: map[172.28.224.172:{}]
	I0328 01:33:31.803643    6044 command_runner.go:130] ! I0328 01:29:04.687979       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.3.0/24] 
	I0328 01:33:31.803643    6044 command_runner.go:130] ! I0328 01:29:14.698762       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:31.803643    6044 command_runner.go:130] ! I0328 01:29:14.698810       1 main.go:227] handling current node
	I0328 01:33:31.803643    6044 command_runner.go:130] ! I0328 01:29:14.698825       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:31.803643    6044 command_runner.go:130] ! I0328 01:29:14.698832       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:31.803643    6044 command_runner.go:130] ! I0328 01:29:14.699169       1 main.go:223] Handling node with IPs: map[172.28.224.172:{}]
	I0328 01:33:31.803643    6044 command_runner.go:130] ! I0328 01:29:14.699203       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.3.0/24] 
	I0328 01:33:31.803643    6044 command_runner.go:130] ! I0328 01:29:24.717977       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:31.803643    6044 command_runner.go:130] ! I0328 01:29:24.718118       1 main.go:227] handling current node
	I0328 01:33:31.804718    6044 command_runner.go:130] ! I0328 01:29:24.718136       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:31.804718    6044 command_runner.go:130] ! I0328 01:29:24.718145       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:31.804718    6044 command_runner.go:130] ! I0328 01:29:24.718279       1 main.go:223] Handling node with IPs: map[172.28.224.172:{}]
	I0328 01:33:31.804718    6044 command_runner.go:130] ! I0328 01:29:24.718311       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.3.0/24] 
	I0328 01:33:31.804718    6044 command_runner.go:130] ! I0328 01:29:34.724517       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:31.804718    6044 command_runner.go:130] ! I0328 01:29:34.724618       1 main.go:227] handling current node
	I0328 01:33:31.804718    6044 command_runner.go:130] ! I0328 01:29:34.724634       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:31.804718    6044 command_runner.go:130] ! I0328 01:29:34.724643       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:31.804718    6044 command_runner.go:130] ! I0328 01:29:34.725226       1 main.go:223] Handling node with IPs: map[172.28.224.172:{}]
	I0328 01:33:31.804718    6044 command_runner.go:130] ! I0328 01:29:34.725416       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.3.0/24] 
	I0328 01:33:31.821684    6044 logs.go:123] Gathering logs for container status ...
	I0328 01:33:31.821684    6044 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:33:31.952615    6044 command_runner.go:130] > CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	I0328 01:33:31.952670    6044 command_runner.go:130] > dea6e77fe6072       8c811b4aec35f                                                                                         7 seconds ago        Running             busybox                   1                   57a41fbc578d5       busybox-7fdf7869d9-ct428
	I0328 01:33:31.952670    6044 command_runner.go:130] > e6a5a75ec447f       cbb01a7bd410d                                                                                         7 seconds ago        Running             coredns                   1                   d3a9caca46521       coredns-76f75df574-776ph
	I0328 01:33:31.952670    6044 command_runner.go:130] > 64647587ffc1f       6e38f40d628db                                                                                         27 seconds ago       Running             storage-provisioner       2                   821d3cf9ae1a9       storage-provisioner
	I0328 01:33:31.952670    6044 command_runner.go:130] > ee99098e42fc1       4950bb10b3f87                                                                                         About a minute ago   Running             kindnet-cni               1                   347f7ad7ebaed       kindnet-rwghf
	I0328 01:33:31.952670    6044 command_runner.go:130] > 4dcf03394ea80       6e38f40d628db                                                                                         About a minute ago   Exited              storage-provisioner       1                   821d3cf9ae1a9       storage-provisioner
	I0328 01:33:31.952670    6044 command_runner.go:130] > 7c9638784c60f       a1d263b5dc5b0                                                                                         About a minute ago   Running             kube-proxy                1                   dfd01cb54b7d8       kube-proxy-47rqg
	I0328 01:33:31.952670    6044 command_runner.go:130] > 6539c85e1b61f       39f995c9f1996                                                                                         About a minute ago   Running             kube-apiserver            0                   4dd7c46520744       kube-apiserver-multinode-240000
	I0328 01:33:31.952670    6044 command_runner.go:130] > ab4a76ecb029b       3861cfcd7c04c                                                                                         About a minute ago   Running             etcd                      0                   8780a18ab9755       etcd-multinode-240000
	I0328 01:33:31.952670    6044 command_runner.go:130] > bc83a37dbd03c       8c390d98f50c0                                                                                         About a minute ago   Running             kube-scheduler            1                   8cf9dbbfda9ea       kube-scheduler-multinode-240000
	I0328 01:33:31.952886    6044 command_runner.go:130] > ceaccf323deed       6052a25da3f97                                                                                         About a minute ago   Running             kube-controller-manager   1                   3314134e34d83       kube-controller-manager-multinode-240000
	I0328 01:33:31.952914    6044 command_runner.go:130] > a130300bc7839       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   21 minutes ago       Exited              busybox                   0                   930fbfde452c0       busybox-7fdf7869d9-ct428
	I0328 01:33:31.952914    6044 command_runner.go:130] > 29e516c918ef4       cbb01a7bd410d                                                                                         25 minutes ago       Exited              coredns                   0                   6b6f67390b070       coredns-76f75df574-776ph
	I0328 01:33:31.952914    6044 command_runner.go:130] > dc9808261b21c       kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988              25 minutes ago       Exited              kindnet-cni               0                   6ae82cd0a8489       kindnet-rwghf
	I0328 01:33:31.952914    6044 command_runner.go:130] > bb0b3c5422645       a1d263b5dc5b0                                                                                         25 minutes ago       Exited              kube-proxy                0                   5d9ed3a20e885       kube-proxy-47rqg
	I0328 01:33:31.952914    6044 command_runner.go:130] > 1aa05268773e4       6052a25da3f97                                                                                         26 minutes ago       Exited              kube-controller-manager   0                   763932cfdf0b0       kube-controller-manager-multinode-240000
	I0328 01:33:31.952914    6044 command_runner.go:130] > 7061eab02790d       8c390d98f50c0                                                                                         26 minutes ago       Exited              kube-scheduler            0                   7415d077c6f81       kube-scheduler-multinode-240000
	I0328 01:33:31.955426    6044 logs.go:123] Gathering logs for kube-scheduler [7061eab02790] ...
	I0328 01:33:31.955500    6044 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7061eab02790"
	I0328 01:33:31.994056    6044 command_runner.go:130] ! I0328 01:07:24.655923       1 serving.go:380] Generated self-signed cert in-memory
	I0328 01:33:31.994570    6044 command_runner.go:130] ! W0328 01:07:26.955719       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	I0328 01:33:31.994633    6044 command_runner.go:130] ! W0328 01:07:26.956050       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	I0328 01:33:31.994633    6044 command_runner.go:130] ! W0328 01:07:26.956340       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	I0328 01:33:31.994633    6044 command_runner.go:130] ! W0328 01:07:26.956518       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0328 01:33:31.994694    6044 command_runner.go:130] ! I0328 01:07:27.011654       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.29.3"
	I0328 01:33:31.994730    6044 command_runner.go:130] ! I0328 01:07:27.011702       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0328 01:33:31.994730    6044 command_runner.go:130] ! I0328 01:07:27.016073       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0328 01:33:31.994730    6044 command_runner.go:130] ! I0328 01:07:27.016395       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0328 01:33:31.994798    6044 command_runner.go:130] ! I0328 01:07:27.016638       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0328 01:33:31.995727    6044 command_runner.go:130] ! W0328 01:07:27.041308       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0328 01:33:31.995727    6044 command_runner.go:130] ! E0328 01:07:27.041400       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0328 01:33:31.995822    6044 command_runner.go:130] ! W0328 01:07:27.041664       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0328 01:33:31.995822    6044 command_runner.go:130] ! E0328 01:07:27.043394       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0328 01:33:31.995822    6044 command_runner.go:130] ! I0328 01:07:27.016423       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0328 01:33:31.995908    6044 command_runner.go:130] ! W0328 01:07:27.042004       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0328 01:33:31.995986    6044 command_runner.go:130] ! E0328 01:07:27.047333       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0328 01:33:31.995986    6044 command_runner.go:130] ! W0328 01:07:27.042140       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0328 01:33:31.996064    6044 command_runner.go:130] ! E0328 01:07:27.047417       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0328 01:33:31.996149    6044 command_runner.go:130] ! W0328 01:07:27.042578       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0328 01:33:31.996149    6044 command_runner.go:130] ! E0328 01:07:27.047834       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0328 01:33:31.996212    6044 command_runner.go:130] ! W0328 01:07:27.042825       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0328 01:33:31.996212    6044 command_runner.go:130] ! E0328 01:07:27.047881       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0328 01:33:31.996212    6044 command_runner.go:130] ! W0328 01:07:27.054199       1 reflector.go:539] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0328 01:33:31.996212    6044 command_runner.go:130] ! E0328 01:07:27.054246       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0328 01:33:31.996212    6044 command_runner.go:130] ! W0328 01:07:27.054853       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0328 01:33:31.996212    6044 command_runner.go:130] ! E0328 01:07:27.054928       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0328 01:33:31.996212    6044 command_runner.go:130] ! W0328 01:07:27.055680       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0328 01:33:31.996212    6044 command_runner.go:130] ! E0328 01:07:27.056176       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0328 01:33:31.996212    6044 command_runner.go:130] ! W0328 01:07:27.056445       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0328 01:33:31.996212    6044 command_runner.go:130] ! E0328 01:07:27.056649       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0328 01:33:31.996212    6044 command_runner.go:130] ! W0328 01:07:27.056923       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0328 01:33:31.996212    6044 command_runner.go:130] ! E0328 01:07:27.057184       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0328 01:33:31.996212    6044 command_runner.go:130] ! W0328 01:07:27.057363       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0328 01:33:31.996212    6044 command_runner.go:130] ! E0328 01:07:27.057575       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0328 01:33:31.996212    6044 command_runner.go:130] ! W0328 01:07:27.057920       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0328 01:33:31.996212    6044 command_runner.go:130] ! E0328 01:07:27.058160       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0328 01:33:31.996212    6044 command_runner.go:130] ! W0328 01:07:27.058539       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0328 01:33:31.996212    6044 command_runner.go:130] ! E0328 01:07:27.058924       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0328 01:33:31.996212    6044 command_runner.go:130] ! W0328 01:07:27.059533       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0328 01:33:31.996749    6044 command_runner.go:130] ! E0328 01:07:27.060749       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0328 01:33:31.996797    6044 command_runner.go:130] ! W0328 01:07:27.927413       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0328 01:33:31.996797    6044 command_runner.go:130] ! E0328 01:07:27.927826       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0328 01:33:31.996867    6044 command_runner.go:130] ! W0328 01:07:28.013939       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0328 01:33:31.996986    6044 command_runner.go:130] ! E0328 01:07:28.014242       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0328 01:33:31.996986    6044 command_runner.go:130] ! W0328 01:07:28.056311       1 reflector.go:539] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0328 01:33:31.996986    6044 command_runner.go:130] ! E0328 01:07:28.058850       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0328 01:33:31.996986    6044 command_runner.go:130] ! W0328 01:07:28.076506       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0328 01:33:31.996986    6044 command_runner.go:130] ! E0328 01:07:28.076537       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0328 01:33:31.996986    6044 command_runner.go:130] ! W0328 01:07:28.106836       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0328 01:33:31.996986    6044 command_runner.go:130] ! E0328 01:07:28.107081       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0328 01:33:31.996986    6044 command_runner.go:130] ! W0328 01:07:28.240756       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0328 01:33:31.996986    6044 command_runner.go:130] ! E0328 01:07:28.240834       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0328 01:33:31.997531    6044 command_runner.go:130] ! W0328 01:07:28.255074       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0328 01:33:31.997531    6044 command_runner.go:130] ! E0328 01:07:28.255356       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0328 01:33:31.997609    6044 command_runner.go:130] ! W0328 01:07:28.278207       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0328 01:33:31.997609    6044 command_runner.go:130] ! E0328 01:07:28.278668       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0328 01:33:31.997732    6044 command_runner.go:130] ! W0328 01:07:28.381584       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0328 01:33:31.997732    6044 command_runner.go:130] ! E0328 01:07:28.381627       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0328 01:33:31.997798    6044 command_runner.go:130] ! W0328 01:07:28.514618       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0328 01:33:31.997798    6044 command_runner.go:130] ! E0328 01:07:28.515155       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0328 01:33:31.997922    6044 command_runner.go:130] ! W0328 01:07:28.528993       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0328 01:33:31.997922    6044 command_runner.go:130] ! E0328 01:07:28.529395       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0328 01:33:31.997922    6044 command_runner.go:130] ! W0328 01:07:28.532653       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0328 01:33:31.997922    6044 command_runner.go:130] ! E0328 01:07:28.532704       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0328 01:33:31.997922    6044 command_runner.go:130] ! W0328 01:07:28.584380       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0328 01:33:31.997922    6044 command_runner.go:130] ! E0328 01:07:28.585331       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0328 01:33:31.997922    6044 command_runner.go:130] ! W0328 01:07:28.617611       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0328 01:33:31.997922    6044 command_runner.go:130] ! E0328 01:07:28.618424       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0328 01:33:31.997922    6044 command_runner.go:130] ! W0328 01:07:28.646703       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0328 01:33:31.997922    6044 command_runner.go:130] ! E0328 01:07:28.647128       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0328 01:33:31.997922    6044 command_runner.go:130] ! I0328 01:07:30.316754       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0328 01:33:31.997922    6044 command_runner.go:130] ! I0328 01:29:38.212199       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0328 01:33:31.997922    6044 command_runner.go:130] ! I0328 01:29:38.213339       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0328 01:33:31.997922    6044 command_runner.go:130] ! I0328 01:29:38.213731       1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0328 01:33:31.997922    6044 command_runner.go:130] ! E0328 01:29:38.223877       1 run.go:74] "command failed" err="finished without leader elect"
	I0328 01:33:32.008543    6044 logs.go:123] Gathering logs for Docker ...
	I0328 01:33:32.008543    6044 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0328 01:33:32.046602    6044 command_runner.go:130] > Mar 28 01:30:39 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0328 01:33:32.046652    6044 command_runner.go:130] > Mar 28 01:30:39 minikube cri-dockerd[221]: time="2024-03-28T01:30:39Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0328 01:33:32.046652    6044 command_runner.go:130] > Mar 28 01:30:39 minikube cri-dockerd[221]: time="2024-03-28T01:30:39Z" level=info msg="Start docker client with request timeout 0s"
	I0328 01:33:32.046652    6044 command_runner.go:130] > Mar 28 01:30:39 minikube cri-dockerd[221]: time="2024-03-28T01:30:39Z" level=fatal msg="failed to get docker version: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0328 01:33:32.046740    6044 command_runner.go:130] > Mar 28 01:30:39 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0328 01:33:32.046775    6044 command_runner.go:130] > Mar 28 01:30:39 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0328 01:33:32.046775    6044 command_runner.go:130] > Mar 28 01:30:39 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0328 01:33:32.046775    6044 command_runner.go:130] > Mar 28 01:30:42 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 1.
	I0328 01:33:32.046775    6044 command_runner.go:130] > Mar 28 01:30:42 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0328 01:33:32.046775    6044 command_runner.go:130] > Mar 28 01:30:42 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0328 01:33:32.046775    6044 command_runner.go:130] > Mar 28 01:30:42 minikube cri-dockerd[411]: time="2024-03-28T01:30:42Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0328 01:33:32.046775    6044 command_runner.go:130] > Mar 28 01:30:42 minikube cri-dockerd[411]: time="2024-03-28T01:30:42Z" level=info msg="Start docker client with request timeout 0s"
	I0328 01:33:32.046775    6044 command_runner.go:130] > Mar 28 01:30:42 minikube cri-dockerd[411]: time="2024-03-28T01:30:42Z" level=fatal msg="failed to get docker version: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0328 01:33:32.046775    6044 command_runner.go:130] > Mar 28 01:30:42 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0328 01:33:32.046775    6044 command_runner.go:130] > Mar 28 01:30:42 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0328 01:33:32.046775    6044 command_runner.go:130] > Mar 28 01:30:42 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0328 01:33:32.046775    6044 command_runner.go:130] > Mar 28 01:30:44 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 2.
	I0328 01:33:32.046775    6044 command_runner.go:130] > Mar 28 01:30:44 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0328 01:33:32.046775    6044 command_runner.go:130] > Mar 28 01:30:44 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0328 01:33:32.046775    6044 command_runner.go:130] > Mar 28 01:30:44 minikube cri-dockerd[432]: time="2024-03-28T01:30:44Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0328 01:33:32.046775    6044 command_runner.go:130] > Mar 28 01:30:44 minikube cri-dockerd[432]: time="2024-03-28T01:30:44Z" level=info msg="Start docker client with request timeout 0s"
	I0328 01:33:32.046775    6044 command_runner.go:130] > Mar 28 01:30:44 minikube cri-dockerd[432]: time="2024-03-28T01:30:44Z" level=fatal msg="failed to get docker version: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0328 01:33:32.046775    6044 command_runner.go:130] > Mar 28 01:30:44 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0328 01:33:32.046775    6044 command_runner.go:130] > Mar 28 01:30:44 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0328 01:33:32.046775    6044 command_runner.go:130] > Mar 28 01:30:44 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0328 01:33:32.046775    6044 command_runner.go:130] > Mar 28 01:30:46 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 3.
	I0328 01:33:32.046775    6044 command_runner.go:130] > Mar 28 01:30:46 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0328 01:33:32.046775    6044 command_runner.go:130] > Mar 28 01:30:46 minikube systemd[1]: cri-docker.service: Start request repeated too quickly.
	I0328 01:33:32.046775    6044 command_runner.go:130] > Mar 28 01:30:46 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0328 01:33:32.046775    6044 command_runner.go:130] > Mar 28 01:30:46 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0328 01:33:32.046775    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 systemd[1]: Starting Docker Application Container Engine...
	I0328 01:33:32.046775    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[661]: time="2024-03-28T01:31:35.187514586Z" level=info msg="Starting up"
	I0328 01:33:32.046775    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[661]: time="2024-03-28T01:31:35.188793924Z" level=info msg="containerd not running, starting managed containerd"
	I0328 01:33:32.046775    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[661]: time="2024-03-28T01:31:35.190152365Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=667
	I0328 01:33:32.046775    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.231336402Z" level=info msg="starting containerd" revision=dcf2847247e18caba8dce86522029642f60fe96b version=v1.7.14
	I0328 01:33:32.046775    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.261679714Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0328 01:33:32.046775    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.261844319Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0328 01:33:32.046775    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.262043225Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0328 01:33:32.047327    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.262141928Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0328 01:33:32.047327    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.262784947Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0328 01:33:32.047377    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.262879050Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0328 01:33:32.047430    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.263137658Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0328 01:33:32.047430    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.263270562Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0328 01:33:32.047430    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.263294463Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	I0328 01:33:32.047495    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.263307663Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0328 01:33:32.047523    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.263734076Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0328 01:33:32.047552    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.264531200Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0328 01:33:32.047552    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.267908401Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0328 01:33:32.047552    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.268045005Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0328 01:33:32.047552    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.268342414Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0328 01:33:32.047552    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.268438817Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0328 01:33:32.047552    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.269089237Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0328 01:33:32.047552    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.269210440Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	I0328 01:33:32.047552    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.269296343Z" level=info msg="metadata content store policy set" policy=shared
	I0328 01:33:32.047552    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.277331684Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0328 01:33:32.047552    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.277533790Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0328 01:33:32.047552    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.277593492Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0328 01:33:32.047552    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.277648694Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0328 01:33:32.047552    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.277726596Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0328 01:33:32.047552    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.277896701Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0328 01:33:32.047552    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.279273243Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0328 01:33:32.047552    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.279706256Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0328 01:33:32.047552    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.279852560Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0328 01:33:32.047552    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.280041166Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0328 01:33:32.047552    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.280280073Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0328 01:33:32.047552    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.280373676Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0328 01:33:32.047552    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.280594982Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0328 01:33:32.047552    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.280657284Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0328 01:33:32.048077    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.280684285Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0328 01:33:32.048152    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.280713086Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0328 01:33:32.048152    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.280731986Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0328 01:33:32.048152    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.280779288Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0328 01:33:32.048152    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.281122598Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0328 01:33:32.048152    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.281392306Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0328 01:33:32.048152    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.281419307Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0328 01:33:32.048152    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.281475909Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0328 01:33:32.048152    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.281497309Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0328 01:33:32.048152    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.281513210Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0328 01:33:32.048152    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.281527910Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0328 01:33:32.048152    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.281575712Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0328 01:33:32.048152    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.281605113Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0328 01:33:32.048152    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.281624613Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0328 01:33:32.048152    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.281640414Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0328 01:33:32.048152    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.281688915Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0328 01:33:32.048152    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.281906822Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0328 01:33:32.048152    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.282137929Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0328 01:33:32.048152    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.282171230Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0328 01:33:32.048152    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.282426837Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0328 01:33:32.048152    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.282452838Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0328 01:33:32.048152    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.282645244Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0328 01:33:32.048152    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.282848450Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	I0328 01:33:32.048152    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.282869251Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0328 01:33:32.048152    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.282883451Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	I0328 01:33:32.048152    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.282996354Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0328 01:33:32.048152    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.283034556Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0328 01:33:32.048152    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.283048856Z" level=info msg="NRI interface is disabled by configuration."
	I0328 01:33:32.048152    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.283357365Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0328 01:33:32.048713    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.283501170Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0328 01:33:32.048713    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.283575472Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0328 01:33:32.048713    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.283615173Z" level=info msg="containerd successfully booted in 0.056485s"
	I0328 01:33:32.048713    6044 command_runner.go:130] > Mar 28 01:31:36 multinode-240000 dockerd[661]: time="2024-03-28T01:31:36.252048243Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0328 01:33:32.048713    6044 command_runner.go:130] > Mar 28 01:31:36 multinode-240000 dockerd[661]: time="2024-03-28T01:31:36.458814267Z" level=info msg="Loading containers: start."
	I0328 01:33:32.048713    6044 command_runner.go:130] > Mar 28 01:31:36 multinode-240000 dockerd[661]: time="2024-03-28T01:31:36.940030727Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0328 01:33:32.048713    6044 command_runner.go:130] > Mar 28 01:31:37 multinode-240000 dockerd[661]: time="2024-03-28T01:31:37.031415390Z" level=info msg="Loading containers: done."
	I0328 01:33:32.048713    6044 command_runner.go:130] > Mar 28 01:31:37 multinode-240000 dockerd[661]: time="2024-03-28T01:31:37.065830879Z" level=info msg="Docker daemon" commit=8b79278 containerd-snapshotter=false storage-driver=overlay2 version=26.0.0
	I0328 01:33:32.048713    6044 command_runner.go:130] > Mar 28 01:31:37 multinode-240000 dockerd[661]: time="2024-03-28T01:31:37.066918879Z" level=info msg="Daemon has completed initialization"
	I0328 01:33:32.048868    6044 command_runner.go:130] > Mar 28 01:31:37 multinode-240000 dockerd[661]: time="2024-03-28T01:31:37.126063860Z" level=info msg="API listen on /var/run/docker.sock"
	I0328 01:33:32.048868    6044 command_runner.go:130] > Mar 28 01:31:37 multinode-240000 dockerd[661]: time="2024-03-28T01:31:37.126232160Z" level=info msg="API listen on [::]:2376"
	I0328 01:33:32.048868    6044 command_runner.go:130] > Mar 28 01:31:37 multinode-240000 systemd[1]: Started Docker Application Container Engine.
	I0328 01:33:32.048868    6044 command_runner.go:130] > Mar 28 01:32:04 multinode-240000 dockerd[661]: time="2024-03-28T01:32:04.977526069Z" level=info msg="Processing signal 'terminated'"
	I0328 01:33:32.048954    6044 command_runner.go:130] > Mar 28 01:32:04 multinode-240000 dockerd[661]: time="2024-03-28T01:32:04.980026875Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0328 01:33:32.049019    6044 command_runner.go:130] > Mar 28 01:32:04 multinode-240000 systemd[1]: Stopping Docker Application Container Engine...
	I0328 01:33:32.049083    6044 command_runner.go:130] > Mar 28 01:32:04 multinode-240000 dockerd[661]: time="2024-03-28T01:32:04.981008678Z" level=info msg="Daemon shutdown complete"
	I0328 01:33:32.049135    6044 command_runner.go:130] > Mar 28 01:32:04 multinode-240000 dockerd[661]: time="2024-03-28T01:32:04.981100578Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0328 01:33:32.049222    6044 command_runner.go:130] > Mar 28 01:32:04 multinode-240000 dockerd[661]: time="2024-03-28T01:32:04.981126378Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0328 01:33:32.049303    6044 command_runner.go:130] > Mar 28 01:32:05 multinode-240000 systemd[1]: docker.service: Deactivated successfully.
	I0328 01:33:32.049303    6044 command_runner.go:130] > Mar 28 01:32:05 multinode-240000 systemd[1]: Stopped Docker Application Container Engine.
	I0328 01:33:32.049303    6044 command_runner.go:130] > Mar 28 01:32:05 multinode-240000 systemd[1]: Starting Docker Application Container Engine...
	I0328 01:33:32.049303    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1051]: time="2024-03-28T01:32:06.063559195Z" level=info msg="Starting up"
	I0328 01:33:32.049373    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1051]: time="2024-03-28T01:32:06.064631697Z" level=info msg="containerd not running, starting managed containerd"
	I0328 01:33:32.049373    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1051]: time="2024-03-28T01:32:06.065637900Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1057
	I0328 01:33:32.050165    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.100209087Z" level=info msg="starting containerd" revision=dcf2847247e18caba8dce86522029642f60fe96b version=v1.7.14
	I0328 01:33:32.050165    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.130085762Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0328 01:33:32.050165    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.130208062Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0328 01:33:32.050165    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.130256862Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0328 01:33:32.050165    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.130275562Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0328 01:33:32.050165    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.130311762Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0328 01:33:32.050165    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.130326962Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0328 01:33:32.050165    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.130572163Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0328 01:33:32.050165    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.130673463Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0328 01:33:32.050165    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.130696363Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	I0328 01:33:32.050165    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.130764663Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0328 01:33:32.050165    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.130798363Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0328 01:33:32.050165    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.130926864Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0328 01:33:32.050165    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.134236672Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0328 01:33:32.050165    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.134361772Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0328 01:33:32.050165    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.134599073Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0328 01:33:32.050165    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.134797173Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0328 01:33:32.050165    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.135068574Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0328 01:33:32.050165    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.135093174Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	I0328 01:33:32.050165    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.135148374Z" level=info msg="metadata content store policy set" policy=shared
	I0328 01:33:32.050165    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.135673176Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0328 01:33:32.050165    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.135920276Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0328 01:33:32.050165    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.135946676Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0328 01:33:32.050716    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.135980176Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0328 01:33:32.050716    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.135997376Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0328 01:33:32.050716    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.136050377Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0328 01:33:32.050716    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.136660078Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0328 01:33:32.050716    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.136812179Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0328 01:33:32.050831    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.136923379Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0328 01:33:32.050831    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.136946979Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0328 01:33:32.050831    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.136964679Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0328 01:33:32.050831    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.136991479Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0328 01:33:32.051155    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.137010579Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0328 01:33:32.051155    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.137027279Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0328 01:33:32.051251    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.137099479Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0328 01:33:32.051251    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.137235380Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0328 01:33:32.051251    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.137265080Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0328 01:33:32.051317    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.137281180Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0328 01:33:32.051317    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.137304080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0328 01:33:32.051317    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.137320180Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0328 01:33:32.051317    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.137338080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0328 01:33:32.051379    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.137353080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0328 01:33:32.051379    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.137374080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0328 01:33:32.051444    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.137389280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0328 01:33:32.051444    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.137427380Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0328 01:33:32.051444    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.137553380Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0328 01:33:32.051509    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.137633981Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0328 01:33:32.051509    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.137657481Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0328 01:33:32.051509    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.137672181Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0328 01:33:32.051572    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.137686281Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0328 01:33:32.051596    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.137700481Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0328 01:33:32.051596    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.137771381Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0328 01:33:32.051652    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.137797181Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0328 01:33:32.051652    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.137811481Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0328 01:33:32.051652    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.137826081Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0328 01:33:32.051731    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.137953481Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0328 01:33:32.051759    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.137975581Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	I0328 01:33:32.051819    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.137988781Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0328 01:33:32.051844    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.138001082Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	I0328 01:33:32.051844    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.138075582Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0328 01:33:32.051913    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.138191982Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0328 01:33:32.051913    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.138211082Z" level=info msg="NRI interface is disabled by configuration."
	I0328 01:33:32.051913    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.138597783Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0328 01:33:32.052073    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.138694583Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0328 01:33:32.052073    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.138839884Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0328 01:33:32.052073    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.138866684Z" level=info msg="containerd successfully booted in 0.040774s"
	I0328 01:33:32.052073    6044 command_runner.go:130] > Mar 28 01:32:07 multinode-240000 dockerd[1051]: time="2024-03-28T01:32:07.114634333Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0328 01:33:32.052073    6044 command_runner.go:130] > Mar 28 01:32:07 multinode-240000 dockerd[1051]: time="2024-03-28T01:32:07.151787026Z" level=info msg="Loading containers: start."
	I0328 01:33:32.052073    6044 command_runner.go:130] > Mar 28 01:32:07 multinode-240000 dockerd[1051]: time="2024-03-28T01:32:07.470888727Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0328 01:33:32.052073    6044 command_runner.go:130] > Mar 28 01:32:07 multinode-240000 dockerd[1051]: time="2024-03-28T01:32:07.559958251Z" level=info msg="Loading containers: done."
	I0328 01:33:32.052073    6044 command_runner.go:130] > Mar 28 01:32:07 multinode-240000 dockerd[1051]: time="2024-03-28T01:32:07.589960526Z" level=info msg="Docker daemon" commit=8b79278 containerd-snapshotter=false storage-driver=overlay2 version=26.0.0
	I0328 01:33:32.052231    6044 command_runner.go:130] > Mar 28 01:32:07 multinode-240000 dockerd[1051]: time="2024-03-28T01:32:07.590109426Z" level=info msg="Daemon has completed initialization"
	I0328 01:33:32.052255    6044 command_runner.go:130] > Mar 28 01:32:07 multinode-240000 dockerd[1051]: time="2024-03-28T01:32:07.638170147Z" level=info msg="API listen on /var/run/docker.sock"
	I0328 01:33:32.052255    6044 command_runner.go:130] > Mar 28 01:32:07 multinode-240000 systemd[1]: Started Docker Application Container Engine.
	I0328 01:33:32.052255    6044 command_runner.go:130] > Mar 28 01:32:07 multinode-240000 dockerd[1051]: time="2024-03-28T01:32:07.638290047Z" level=info msg="API listen on [::]:2376"
	I0328 01:33:32.052255    6044 command_runner.go:130] > Mar 28 01:32:08 multinode-240000 systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0328 01:33:32.052255    6044 command_runner.go:130] > Mar 28 01:32:08 multinode-240000 cri-dockerd[1277]: time="2024-03-28T01:32:08Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0328 01:33:32.052339    6044 command_runner.go:130] > Mar 28 01:32:08 multinode-240000 cri-dockerd[1277]: time="2024-03-28T01:32:08Z" level=info msg="Start docker client with request timeout 0s"
	I0328 01:33:32.052339    6044 command_runner.go:130] > Mar 28 01:32:08 multinode-240000 cri-dockerd[1277]: time="2024-03-28T01:32:08Z" level=info msg="Hairpin mode is set to hairpin-veth"
	I0328 01:33:32.052339    6044 command_runner.go:130] > Mar 28 01:32:08 multinode-240000 cri-dockerd[1277]: time="2024-03-28T01:32:08Z" level=info msg="Loaded network plugin cni"
	I0328 01:33:32.052339    6044 command_runner.go:130] > Mar 28 01:32:08 multinode-240000 cri-dockerd[1277]: time="2024-03-28T01:32:08Z" level=info msg="Docker cri networking managed by network plugin cni"
	I0328 01:33:32.052521    6044 command_runner.go:130] > Mar 28 01:32:08 multinode-240000 cri-dockerd[1277]: time="2024-03-28T01:32:08Z" level=info msg="Docker Info: &{ID:c06283fc-1f43-4b26-80be-81922335c5fe Containers:18 ContainersRunning:0 ContainersPaused:0 ContainersStopped:18 Images:10 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:[] Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:[] Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6tables:true Debug:false NFd:27 OomKillDisable:false NGoroutines:49 SystemTime:2024-03-28T01:32:08.776685604Z LoggingDriver:json-file CgroupDriver:cgroupfs CgroupVersion:2 NEventsListener:0 Ke
rnelVersion:5.10.207 OperatingSystem:Buildroot 2023.02.9 OSVersion:2023.02.9 OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:0xc0002cf3b0 NCPU:2 MemTotal:2216206336 GenericResources:[] DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:multinode-240000 Labels:[provider=hyperv] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:map[io.containerd.runc.v2:{Path:runc Args:[] Shim:<nil>} runc:{Path:runc Args:[] Shim:<nil>}] DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:[] Nodes:0 Managers:0 Cluster:<nil> Warnings:[]} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dcf2847247e18caba8dce86522029642f60fe96b Expected:dcf2847247e18caba8dce86522029642f60fe96b} RuncCommit:{ID:51d5e94601ceffbbd85688df1c928ecccbfa4685 Expected:51d5e94601ceffbbd85688df1c928ecccbfa4685} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[nam
e=seccomp,profile=builtin name=cgroupns] ProductLicense:Community Engine DefaultAddressPools:[] Warnings:[]}"
	I0328 01:33:32.052652    6044 command_runner.go:130] > Mar 28 01:32:08 multinode-240000 cri-dockerd[1277]: time="2024-03-28T01:32:08Z" level=info msg="Setting cgroupDriver cgroupfs"
	I0328 01:33:32.052652    6044 command_runner.go:130] > Mar 28 01:32:08 multinode-240000 cri-dockerd[1277]: time="2024-03-28T01:32:08Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	I0328 01:33:32.052652    6044 command_runner.go:130] > Mar 28 01:32:08 multinode-240000 cri-dockerd[1277]: time="2024-03-28T01:32:08Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	I0328 01:33:32.052716    6044 command_runner.go:130] > Mar 28 01:32:08 multinode-240000 cri-dockerd[1277]: time="2024-03-28T01:32:08Z" level=info msg="Start cri-dockerd grpc backend"
	I0328 01:33:32.052716    6044 command_runner.go:130] > Mar 28 01:32:08 multinode-240000 systemd[1]: Started CRI Interface for Docker Application Container Engine.
	I0328 01:33:32.052752    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 cri-dockerd[1277]: time="2024-03-28T01:32:13Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"busybox-7fdf7869d9-ct428_default\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"930fbfde452c0b2b3f13a6751fc648a70e87137f38175cb6dd161b40193b9a79\""
	I0328 01:33:32.052817    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 cri-dockerd[1277]: time="2024-03-28T01:32:13Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-76f75df574-776ph_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"6b6f67390b0701700963eec28e4c4cc4aa0e852e4ec0f2392f0f6f5d9bdad52a\""
	I0328 01:33:32.052817    6044 command_runner.go:130] > Mar 28 01:32:14 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:14.605075633Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0328 01:33:32.052817    6044 command_runner.go:130] > Mar 28 01:32:14 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:14.605218534Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0328 01:33:32.052817    6044 command_runner.go:130] > Mar 28 01:32:14 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:14.605234734Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0328 01:33:32.052817    6044 command_runner.go:130] > Mar 28 01:32:14 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:14.606038436Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0328 01:33:32.052817    6044 command_runner.go:130] > Mar 28 01:32:14 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:14.748289893Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0328 01:33:32.052817    6044 command_runner.go:130] > Mar 28 01:32:14 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:14.748491293Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0328 01:33:32.052817    6044 command_runner.go:130] > Mar 28 01:32:14 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:14.748521793Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0328 01:33:32.052817    6044 command_runner.go:130] > Mar 28 01:32:14 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:14.748642993Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0328 01:33:32.052817    6044 command_runner.go:130] > Mar 28 01:32:14 multinode-240000 cri-dockerd[1277]: time="2024-03-28T01:32:14Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/3314134e34d83c71815af773bff505973dcb9797421f75a59b98862dc8bc69bf/resolv.conf as [nameserver 172.28.224.1]"
	I0328 01:33:32.052817    6044 command_runner.go:130] > Mar 28 01:32:14 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:14.844158033Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0328 01:33:32.052817    6044 command_runner.go:130] > Mar 28 01:32:14 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:14.844387234Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0328 01:33:32.052817    6044 command_runner.go:130] > Mar 28 01:32:14 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:14.844509634Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0328 01:33:32.052817    6044 command_runner.go:130] > Mar 28 01:32:14 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:14.844924435Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0328 01:33:32.052817    6044 command_runner.go:130] > Mar 28 01:32:14 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:14.862145778Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0328 01:33:32.052817    6044 command_runner.go:130] > Mar 28 01:32:14 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:14.862239979Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0328 01:33:32.052817    6044 command_runner.go:130] > Mar 28 01:32:14 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:14.862251979Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0328 01:33:32.052817    6044 command_runner.go:130] > Mar 28 01:32:14 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:14.862457779Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0328 01:33:32.053243    6044 command_runner.go:130] > Mar 28 01:32:14 multinode-240000 cri-dockerd[1277]: time="2024-03-28T01:32:14Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/8cf9dbbfda9ea6f2b61a134374c1f92196fe22bde8e166de86c62d863a2fbdb9/resolv.conf as [nameserver 172.28.224.1]"
	I0328 01:33:32.053243    6044 command_runner.go:130] > Mar 28 01:32:15 multinode-240000 cri-dockerd[1277]: time="2024-03-28T01:32:15Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/8780a18ab975521e6b1b20e4b7cffe786927f03654dd858b9d179f1d73d13d81/resolv.conf as [nameserver 172.28.224.1]"
	I0328 01:33:32.053243    6044 command_runner.go:130] > Mar 28 01:32:15 multinode-240000 cri-dockerd[1277]: time="2024-03-28T01:32:15Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/4dd7c4652074475872599900ce854e48425a373dfa665073bd9bfb56fa5330c0/resolv.conf as [nameserver 172.28.224.1]"
	I0328 01:33:32.053243    6044 command_runner.go:130] > Mar 28 01:32:15 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:15.196398617Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0328 01:33:32.053243    6044 command_runner.go:130] > Mar 28 01:32:15 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:15.196541018Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0328 01:33:32.053243    6044 command_runner.go:130] > Mar 28 01:32:15 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:15.196606818Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0328 01:33:32.053397    6044 command_runner.go:130] > Mar 28 01:32:15 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:15.199212424Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0328 01:33:32.053397    6044 command_runner.go:130] > Mar 28 01:32:15 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:15.279595426Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0328 01:33:32.053397    6044 command_runner.go:130] > Mar 28 01:32:15 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:15.279693326Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0328 01:33:32.053484    6044 command_runner.go:130] > Mar 28 01:32:15 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:15.279767327Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0328 01:33:32.053484    6044 command_runner.go:130] > Mar 28 01:32:15 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:15.280052327Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0328 01:33:32.053534    6044 command_runner.go:130] > Mar 28 01:32:15 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:15.393428912Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0328 01:33:32.053534    6044 command_runner.go:130] > Mar 28 01:32:15 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:15.393536412Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0328 01:33:32.053534    6044 command_runner.go:130] > Mar 28 01:32:15 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:15.393553112Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0328 01:33:32.053534    6044 command_runner.go:130] > Mar 28 01:32:15 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:15.393951413Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0328 01:33:32.053534    6044 command_runner.go:130] > Mar 28 01:32:15 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:15.409559852Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0328 01:33:32.053534    6044 command_runner.go:130] > Mar 28 01:32:15 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:15.409616852Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0328 01:33:32.053534    6044 command_runner.go:130] > Mar 28 01:32:15 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:15.409628953Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0328 01:33:32.053534    6044 command_runner.go:130] > Mar 28 01:32:15 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:15.410047254Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0328 01:33:32.053534    6044 command_runner.go:130] > Mar 28 01:32:19 multinode-240000 cri-dockerd[1277]: time="2024-03-28T01:32:19Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
	I0328 01:33:32.053534    6044 command_runner.go:130] > Mar 28 01:32:20 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:20.444492990Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0328 01:33:32.053534    6044 command_runner.go:130] > Mar 28 01:32:20 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:20.445565592Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0328 01:33:32.053534    6044 command_runner.go:130] > Mar 28 01:32:20 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:20.461244632Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0328 01:33:32.053534    6044 command_runner.go:130] > Mar 28 01:32:20 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:20.465433642Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0328 01:33:32.053534    6044 command_runner.go:130] > Mar 28 01:32:20 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:20.501034531Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0328 01:33:32.053534    6044 command_runner.go:130] > Mar 28 01:32:20 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:20.501100632Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0328 01:33:32.053534    6044 command_runner.go:130] > Mar 28 01:32:20 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:20.501129332Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0328 01:33:32.053534    6044 command_runner.go:130] > Mar 28 01:32:20 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:20.501289432Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0328 01:33:32.053534    6044 command_runner.go:130] > Mar 28 01:32:20 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:20.552329460Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0328 01:33:32.053534    6044 command_runner.go:130] > Mar 28 01:32:20 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:20.552525461Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0328 01:33:32.053534    6044 command_runner.go:130] > Mar 28 01:32:20 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:20.552550661Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0328 01:33:32.053534    6044 command_runner.go:130] > Mar 28 01:32:20 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:20.553090962Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0328 01:33:32.053534    6044 command_runner.go:130] > Mar 28 01:32:20 multinode-240000 cri-dockerd[1277]: time="2024-03-28T01:32:20Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/dfd01cb54b7d89aef97b057d7578bb34d4f58b0e2c9aacddeeff9fbb19db3cb6/resolv.conf as [nameserver 172.28.224.1]"
	I0328 01:33:32.054065    6044 command_runner.go:130] > Mar 28 01:32:20 multinode-240000 cri-dockerd[1277]: time="2024-03-28T01:32:20Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/821d3cf9ae1a9ffce2f350e9ee239e00fd8743eb338fae8a5b39734fc9cabf5e/resolv.conf as [nameserver 172.28.224.1]"
	I0328 01:33:32.054065    6044 command_runner.go:130] > Mar 28 01:32:21 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:21.129523609Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0328 01:33:32.054065    6044 command_runner.go:130] > Mar 28 01:32:21 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:21.129601909Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0328 01:33:32.054065    6044 command_runner.go:130] > Mar 28 01:32:21 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:21.129619209Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0328 01:33:32.054220    6044 command_runner.go:130] > Mar 28 01:32:21 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:21.129777210Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0328 01:33:32.054220    6044 command_runner.go:130] > Mar 28 01:32:21 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:21.142530242Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0328 01:33:32.054265    6044 command_runner.go:130] > Mar 28 01:32:21 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:21.142656442Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0328 01:33:32.054300    6044 command_runner.go:130] > Mar 28 01:32:21 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:21.142692242Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0328 01:33:32.054300    6044 command_runner.go:130] > Mar 28 01:32:21 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:21.143468544Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0328 01:33:32.054300    6044 command_runner.go:130] > Mar 28 01:32:21 multinode-240000 cri-dockerd[1277]: time="2024-03-28T01:32:21Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/347f7ad7ebaed8796c8b12cf936e661c605c1c7a9dc02ccb15b4c682a96c1058/resolv.conf as [nameserver 172.28.224.1]"
	I0328 01:33:32.054364    6044 command_runner.go:130] > Mar 28 01:32:21 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:21.510503865Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0328 01:33:32.054364    6044 command_runner.go:130] > Mar 28 01:32:21 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:21.512149169Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0328 01:33:32.054364    6044 command_runner.go:130] > Mar 28 01:32:21 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:21.515162977Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0328 01:33:32.054364    6044 command_runner.go:130] > Mar 28 01:32:21 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:21.515941979Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0328 01:33:32.054364    6044 command_runner.go:130] > Mar 28 01:32:51 multinode-240000 dockerd[1051]: time="2024-03-28T01:32:51.802252517Z" level=info msg="ignoring event" container=4dcf03394ea80c2d0427ea263be7c533ab3aaf86ba89e254db95cbdd1b70a343 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0328 01:33:32.054364    6044 command_runner.go:130] > Mar 28 01:32:51 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:51.804266497Z" level=info msg="shim disconnected" id=4dcf03394ea80c2d0427ea263be7c533ab3aaf86ba89e254db95cbdd1b70a343 namespace=moby
	I0328 01:33:32.054364    6044 command_runner.go:130] > Mar 28 01:32:51 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:51.805357585Z" level=warning msg="cleaning up after shim disconnected" id=4dcf03394ea80c2d0427ea263be7c533ab3aaf86ba89e254db95cbdd1b70a343 namespace=moby
	I0328 01:33:32.054364    6044 command_runner.go:130] > Mar 28 01:32:51 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:51.805496484Z" level=info msg="cleaning up dead shim" namespace=moby
	I0328 01:33:32.054364    6044 command_runner.go:130] > Mar 28 01:33:05 multinode-240000 dockerd[1057]: time="2024-03-28T01:33:05.040212718Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0328 01:33:32.054364    6044 command_runner.go:130] > Mar 28 01:33:05 multinode-240000 dockerd[1057]: time="2024-03-28T01:33:05.040328718Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0328 01:33:32.054364    6044 command_runner.go:130] > Mar 28 01:33:05 multinode-240000 dockerd[1057]: time="2024-03-28T01:33:05.041880913Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0328 01:33:32.054364    6044 command_runner.go:130] > Mar 28 01:33:05 multinode-240000 dockerd[1057]: time="2024-03-28T01:33:05.044028408Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0328 01:33:32.054364    6044 command_runner.go:130] > Mar 28 01:33:24 multinode-240000 dockerd[1057]: time="2024-03-28T01:33:24.067078014Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0328 01:33:32.054364    6044 command_runner.go:130] > Mar 28 01:33:24 multinode-240000 dockerd[1057]: time="2024-03-28T01:33:24.067134214Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0328 01:33:32.054364    6044 command_runner.go:130] > Mar 28 01:33:24 multinode-240000 dockerd[1057]: time="2024-03-28T01:33:24.067145514Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0328 01:33:32.054364    6044 command_runner.go:130] > Mar 28 01:33:24 multinode-240000 dockerd[1057]: time="2024-03-28T01:33:24.067230414Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0328 01:33:32.054364    6044 command_runner.go:130] > Mar 28 01:33:24 multinode-240000 dockerd[1057]: time="2024-03-28T01:33:24.074234221Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0328 01:33:32.054364    6044 command_runner.go:130] > Mar 28 01:33:24 multinode-240000 dockerd[1057]: time="2024-03-28T01:33:24.074356921Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0328 01:33:32.054364    6044 command_runner.go:130] > Mar 28 01:33:24 multinode-240000 dockerd[1057]: time="2024-03-28T01:33:24.074428021Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0328 01:33:32.054364    6044 command_runner.go:130] > Mar 28 01:33:24 multinode-240000 dockerd[1057]: time="2024-03-28T01:33:24.074678322Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0328 01:33:32.054894    6044 command_runner.go:130] > Mar 28 01:33:24 multinode-240000 cri-dockerd[1277]: time="2024-03-28T01:33:24Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/d3a9caca4652153f4a871cbd85e3780df506a9ae46da758a86025933fbaed683/resolv.conf as [nameserver 172.28.224.1]"
	I0328 01:33:32.054894    6044 command_runner.go:130] > Mar 28 01:33:24 multinode-240000 cri-dockerd[1277]: time="2024-03-28T01:33:24Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/57a41fbc578d50e83f1d23eab9fdc7d77f76594eb2d17300827b52b00008af13/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	I0328 01:33:32.054894    6044 command_runner.go:130] > Mar 28 01:33:24 multinode-240000 dockerd[1057]: time="2024-03-28T01:33:24.642121747Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0328 01:33:32.054967    6044 command_runner.go:130] > Mar 28 01:33:24 multinode-240000 dockerd[1057]: time="2024-03-28T01:33:24.644702250Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0328 01:33:32.054993    6044 command_runner.go:130] > Mar 28 01:33:24 multinode-240000 dockerd[1057]: time="2024-03-28T01:33:24.644921750Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0328 01:33:32.054993    6044 command_runner.go:130] > Mar 28 01:33:24 multinode-240000 dockerd[1057]: time="2024-03-28T01:33:24.645074450Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0328 01:33:32.055043    6044 command_runner.go:130] > Mar 28 01:33:24 multinode-240000 dockerd[1057]: time="2024-03-28T01:33:24.675693486Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0328 01:33:32.055068    6044 command_runner.go:130] > Mar 28 01:33:24 multinode-240000 dockerd[1057]: time="2024-03-28T01:33:24.675868286Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0328 01:33:32.055068    6044 command_runner.go:130] > Mar 28 01:33:24 multinode-240000 dockerd[1057]: time="2024-03-28T01:33:24.675939787Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0328 01:33:32.055112    6044 command_runner.go:130] > Mar 28 01:33:24 multinode-240000 dockerd[1057]: time="2024-03-28T01:33:24.676054087Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0328 01:33:32.055112    6044 command_runner.go:130] > Mar 28 01:33:27 multinode-240000 dockerd[1051]: 2024/03/28 01:33:27 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0328 01:33:32.055112    6044 command_runner.go:130] > Mar 28 01:33:27 multinode-240000 dockerd[1051]: 2024/03/28 01:33:27 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0328 01:33:32.055112    6044 command_runner.go:130] > Mar 28 01:33:27 multinode-240000 dockerd[1051]: 2024/03/28 01:33:27 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0328 01:33:32.055112    6044 command_runner.go:130] > Mar 28 01:33:27 multinode-240000 dockerd[1051]: 2024/03/28 01:33:27 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0328 01:33:32.055112    6044 command_runner.go:130] > Mar 28 01:33:27 multinode-240000 dockerd[1051]: 2024/03/28 01:33:27 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0328 01:33:32.055112    6044 command_runner.go:130] > Mar 28 01:33:28 multinode-240000 dockerd[1051]: 2024/03/28 01:33:28 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0328 01:33:32.055112    6044 command_runner.go:130] > Mar 28 01:33:28 multinode-240000 dockerd[1051]: 2024/03/28 01:33:28 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0328 01:33:32.055112    6044 command_runner.go:130] > Mar 28 01:33:28 multinode-240000 dockerd[1051]: 2024/03/28 01:33:28 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0328 01:33:32.055112    6044 command_runner.go:130] > Mar 28 01:33:28 multinode-240000 dockerd[1051]: 2024/03/28 01:33:28 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0328 01:33:32.055112    6044 command_runner.go:130] > Mar 28 01:33:28 multinode-240000 dockerd[1051]: 2024/03/28 01:33:28 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0328 01:33:32.055112    6044 command_runner.go:130] > Mar 28 01:33:28 multinode-240000 dockerd[1051]: 2024/03/28 01:33:28 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0328 01:33:32.055112    6044 command_runner.go:130] > Mar 28 01:33:28 multinode-240000 dockerd[1051]: 2024/03/28 01:33:28 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0328 01:33:32.055112    6044 command_runner.go:130] > Mar 28 01:33:31 multinode-240000 dockerd[1051]: 2024/03/28 01:33:31 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0328 01:33:32.055112    6044 command_runner.go:130] > Mar 28 01:33:31 multinode-240000 dockerd[1051]: 2024/03/28 01:33:31 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0328 01:33:32.055112    6044 command_runner.go:130] > Mar 28 01:33:31 multinode-240000 dockerd[1051]: 2024/03/28 01:33:31 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0328 01:33:32.055112    6044 command_runner.go:130] > Mar 28 01:33:31 multinode-240000 dockerd[1051]: 2024/03/28 01:33:31 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0328 01:33:32.055112    6044 command_runner.go:130] > Mar 28 01:33:31 multinode-240000 dockerd[1051]: 2024/03/28 01:33:31 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0328 01:33:32.055112    6044 command_runner.go:130] > Mar 28 01:33:31 multinode-240000 dockerd[1051]: 2024/03/28 01:33:31 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0328 01:33:32.055112    6044 command_runner.go:130] > Mar 28 01:33:31 multinode-240000 dockerd[1051]: 2024/03/28 01:33:31 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0328 01:33:32.055112    6044 command_runner.go:130] > Mar 28 01:33:31 multinode-240000 dockerd[1051]: 2024/03/28 01:33:31 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0328 01:33:32.055112    6044 command_runner.go:130] > Mar 28 01:33:32 multinode-240000 dockerd[1051]: 2024/03/28 01:33:32 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0328 01:33:32.091487    6044 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:33:32.091487    6044 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0328 01:33:32.368419    6044 command_runner.go:130] > Name:               multinode-240000
	I0328 01:33:32.368465    6044 command_runner.go:130] > Roles:              control-plane
	I0328 01:33:32.368465    6044 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0328 01:33:32.368465    6044 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0328 01:33:32.368465    6044 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0328 01:33:32.368547    6044 command_runner.go:130] >                     kubernetes.io/hostname=multinode-240000
	I0328 01:33:32.368547    6044 command_runner.go:130] >                     kubernetes.io/os=linux
	I0328 01:33:32.368547    6044 command_runner.go:130] >                     minikube.k8s.io/commit=1a940f980f77c9bfc8ef93678db8c1f49c9dd79d
	I0328 01:33:32.368547    6044 command_runner.go:130] >                     minikube.k8s.io/name=multinode-240000
	I0328 01:33:32.368547    6044 command_runner.go:130] >                     minikube.k8s.io/primary=true
	I0328 01:33:32.368547    6044 command_runner.go:130] >                     minikube.k8s.io/updated_at=2024_03_28T01_07_32_0700
	I0328 01:33:32.368547    6044 command_runner.go:130] >                     minikube.k8s.io/version=v1.33.0-beta.0
	I0328 01:33:32.368547    6044 command_runner.go:130] >                     node-role.kubernetes.io/control-plane=
	I0328 01:33:32.368547    6044 command_runner.go:130] >                     node.kubernetes.io/exclude-from-external-load-balancers=
	I0328 01:33:32.368547    6044 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0328 01:33:32.368671    6044 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0328 01:33:32.368671    6044 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0328 01:33:32.368671    6044 command_runner.go:130] > CreationTimestamp:  Thu, 28 Mar 2024 01:07:27 +0000
	I0328 01:33:32.368671    6044 command_runner.go:130] > Taints:             <none>
	I0328 01:33:32.368671    6044 command_runner.go:130] > Unschedulable:      false
	I0328 01:33:32.368671    6044 command_runner.go:130] > Lease:
	I0328 01:33:32.368671    6044 command_runner.go:130] >   HolderIdentity:  multinode-240000
	I0328 01:33:32.368671    6044 command_runner.go:130] >   AcquireTime:     <unset>
	I0328 01:33:32.368773    6044 command_runner.go:130] >   RenewTime:       Thu, 28 Mar 2024 01:33:30 +0000
	I0328 01:33:32.368773    6044 command_runner.go:130] > Conditions:
	I0328 01:33:32.368773    6044 command_runner.go:130] >   Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	I0328 01:33:32.368773    6044 command_runner.go:130] >   ----             ------  -----------------                 ------------------                ------                       -------
	I0328 01:33:32.368773    6044 command_runner.go:130] >   MemoryPressure   False   Thu, 28 Mar 2024 01:32:53 +0000   Thu, 28 Mar 2024 01:07:24 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	I0328 01:33:32.368773    6044 command_runner.go:130] >   DiskPressure     False   Thu, 28 Mar 2024 01:32:53 +0000   Thu, 28 Mar 2024 01:07:24 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	I0328 01:33:32.368773    6044 command_runner.go:130] >   PIDPressure      False   Thu, 28 Mar 2024 01:32:53 +0000   Thu, 28 Mar 2024 01:07:24 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	I0328 01:33:32.368773    6044 command_runner.go:130] >   Ready            True    Thu, 28 Mar 2024 01:32:53 +0000   Thu, 28 Mar 2024 01:32:53 +0000   KubeletReady                 kubelet is posting ready status
	I0328 01:33:32.368773    6044 command_runner.go:130] > Addresses:
	I0328 01:33:32.368773    6044 command_runner.go:130] >   InternalIP:  172.28.229.19
	I0328 01:33:32.368773    6044 command_runner.go:130] >   Hostname:    multinode-240000
	I0328 01:33:32.368773    6044 command_runner.go:130] > Capacity:
	I0328 01:33:32.368773    6044 command_runner.go:130] >   cpu:                2
	I0328 01:33:32.368773    6044 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0328 01:33:32.368773    6044 command_runner.go:130] >   hugepages-2Mi:      0
	I0328 01:33:32.368773    6044 command_runner.go:130] >   memory:             2164264Ki
	I0328 01:33:32.368773    6044 command_runner.go:130] >   pods:               110
	I0328 01:33:32.368773    6044 command_runner.go:130] > Allocatable:
	I0328 01:33:32.368773    6044 command_runner.go:130] >   cpu:                2
	I0328 01:33:32.368773    6044 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0328 01:33:32.369013    6044 command_runner.go:130] >   hugepages-2Mi:      0
	I0328 01:33:32.369013    6044 command_runner.go:130] >   memory:             2164264Ki
	I0328 01:33:32.369013    6044 command_runner.go:130] >   pods:               110
	I0328 01:33:32.369013    6044 command_runner.go:130] > System Info:
	I0328 01:33:32.369013    6044 command_runner.go:130] >   Machine ID:                 fe98ff783f164d50926235b1a1a0c9a9
	I0328 01:33:32.369013    6044 command_runner.go:130] >   System UUID:                074b49af-5c50-b749-b0a9-2a3d75bf97a0
	I0328 01:33:32.369013    6044 command_runner.go:130] >   Boot ID:                    88b5f16c-258a-4fb6-a998-e0ffa63edff9
	I0328 01:33:32.369013    6044 command_runner.go:130] >   Kernel Version:             5.10.207
	I0328 01:33:32.369013    6044 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0328 01:33:32.369119    6044 command_runner.go:130] >   Operating System:           linux
	I0328 01:33:32.369148    6044 command_runner.go:130] >   Architecture:               amd64
	I0328 01:33:32.369148    6044 command_runner.go:130] >   Container Runtime Version:  docker://26.0.0
	I0328 01:33:32.369218    6044 command_runner.go:130] >   Kubelet Version:            v1.29.3
	I0328 01:33:32.369218    6044 command_runner.go:130] >   Kube-Proxy Version:         v1.29.3
	I0328 01:33:32.369218    6044 command_runner.go:130] > PodCIDR:                      10.244.0.0/24
	I0328 01:33:32.369218    6044 command_runner.go:130] > PodCIDRs:                     10.244.0.0/24
	I0328 01:33:32.369277    6044 command_runner.go:130] > Non-terminated Pods:          (9 in total)
	I0328 01:33:32.369277    6044 command_runner.go:130] >   Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0328 01:33:32.369277    6044 command_runner.go:130] >   ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	I0328 01:33:32.369277    6044 command_runner.go:130] >   default                     busybox-7fdf7869d9-ct428                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	I0328 01:33:32.369368    6044 command_runner.go:130] >   kube-system                 coredns-76f75df574-776ph                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     25m
	I0328 01:33:32.369368    6044 command_runner.go:130] >   kube-system                 etcd-multinode-240000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         73s
	I0328 01:33:32.369368    6044 command_runner.go:130] >   kube-system                 kindnet-rwghf                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      25m
	I0328 01:33:32.369368    6044 command_runner.go:130] >   kube-system                 kube-apiserver-multinode-240000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         73s
	I0328 01:33:32.369449    6044 command_runner.go:130] >   kube-system                 kube-controller-manager-multinode-240000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         26m
	I0328 01:33:32.369468    6044 command_runner.go:130] >   kube-system                 kube-proxy-47rqg                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         25m
	I0328 01:33:32.369468    6044 command_runner.go:130] >   kube-system                 kube-scheduler-multinode-240000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         26m
	I0328 01:33:32.369526    6044 command_runner.go:130] >   kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         25m
	I0328 01:33:32.369526    6044 command_runner.go:130] > Allocated resources:
	I0328 01:33:32.369526    6044 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0328 01:33:32.369526    6044 command_runner.go:130] >   Resource           Requests     Limits
	I0328 01:33:32.369592    6044 command_runner.go:130] >   --------           --------     ------
	I0328 01:33:32.369592    6044 command_runner.go:130] >   cpu                850m (42%)   100m (5%)
	I0328 01:33:32.369592    6044 command_runner.go:130] >   memory             220Mi (10%)  220Mi (10%)
	I0328 01:33:32.369592    6044 command_runner.go:130] >   ephemeral-storage  0 (0%)       0 (0%)
	I0328 01:33:32.369592    6044 command_runner.go:130] >   hugepages-2Mi      0 (0%)       0 (0%)
	I0328 01:33:32.369653    6044 command_runner.go:130] > Events:
	I0328 01:33:32.369653    6044 command_runner.go:130] >   Type    Reason                   Age                From             Message
	I0328 01:33:32.369653    6044 command_runner.go:130] >   ----    ------                   ----               ----             -------
	I0328 01:33:32.369653    6044 command_runner.go:130] >   Normal  Starting                 25m                kube-proxy       
	I0328 01:33:32.369653    6044 command_runner.go:130] >   Normal  Starting                 69s                kube-proxy       
	I0328 01:33:32.369653    6044 command_runner.go:130] >   Normal  Starting                 26m                kubelet          Starting kubelet.
	I0328 01:33:32.369653    6044 command_runner.go:130] >   Normal  NodeHasSufficientMemory  26m (x8 over 26m)  kubelet          Node multinode-240000 status is now: NodeHasSufficientMemory
	I0328 01:33:32.369653    6044 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    26m (x8 over 26m)  kubelet          Node multinode-240000 status is now: NodeHasNoDiskPressure
	I0328 01:33:32.369777    6044 command_runner.go:130] >   Normal  NodeHasSufficientPID     26m (x7 over 26m)  kubelet          Node multinode-240000 status is now: NodeHasSufficientPID
	I0328 01:33:32.369777    6044 command_runner.go:130] >   Normal  NodeAllocatableEnforced  26m                kubelet          Updated Node Allocatable limit across pods
	I0328 01:33:32.369777    6044 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    26m                kubelet          Node multinode-240000 status is now: NodeHasNoDiskPressure
	I0328 01:33:32.369846    6044 command_runner.go:130] >   Normal  NodeAllocatableEnforced  26m                kubelet          Updated Node Allocatable limit across pods
	I0328 01:33:32.369870    6044 command_runner.go:130] >   Normal  NodeHasSufficientMemory  26m                kubelet          Node multinode-240000 status is now: NodeHasSufficientMemory
	I0328 01:33:32.369870    6044 command_runner.go:130] >   Normal  NodeHasSufficientPID     26m                kubelet          Node multinode-240000 status is now: NodeHasSufficientPID
	I0328 01:33:32.369897    6044 command_runner.go:130] >   Normal  Starting                 26m                kubelet          Starting kubelet.
	I0328 01:33:32.369897    6044 command_runner.go:130] >   Normal  RegisteredNode           25m                node-controller  Node multinode-240000 event: Registered Node multinode-240000 in Controller
	I0328 01:33:32.369897    6044 command_runner.go:130] >   Normal  NodeReady                25m                kubelet          Node multinode-240000 status is now: NodeReady
	I0328 01:33:32.369897    6044 command_runner.go:130] >   Normal  Starting                 79s                kubelet          Starting kubelet.
	I0328 01:33:32.369897    6044 command_runner.go:130] >   Normal  NodeHasSufficientPID     79s (x7 over 79s)  kubelet          Node multinode-240000 status is now: NodeHasSufficientPID
	I0328 01:33:32.369897    6044 command_runner.go:130] >   Normal  NodeAllocatableEnforced  79s                kubelet          Updated Node Allocatable limit across pods
	I0328 01:33:32.369897    6044 command_runner.go:130] >   Normal  NodeHasSufficientMemory  78s (x8 over 79s)  kubelet          Node multinode-240000 status is now: NodeHasSufficientMemory
	I0328 01:33:32.369897    6044 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    78s (x8 over 79s)  kubelet          Node multinode-240000 status is now: NodeHasNoDiskPressure
	I0328 01:33:32.369897    6044 command_runner.go:130] >   Normal  RegisteredNode           60s                node-controller  Node multinode-240000 event: Registered Node multinode-240000 in Controller
	I0328 01:33:32.369897    6044 command_runner.go:130] > Name:               multinode-240000-m02
	I0328 01:33:32.369897    6044 command_runner.go:130] > Roles:              <none>
	I0328 01:33:32.369897    6044 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0328 01:33:32.369897    6044 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0328 01:33:32.369897    6044 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0328 01:33:32.369897    6044 command_runner.go:130] >                     kubernetes.io/hostname=multinode-240000-m02
	I0328 01:33:32.369897    6044 command_runner.go:130] >                     kubernetes.io/os=linux
	I0328 01:33:32.369897    6044 command_runner.go:130] >                     minikube.k8s.io/commit=1a940f980f77c9bfc8ef93678db8c1f49c9dd79d
	I0328 01:33:32.369897    6044 command_runner.go:130] >                     minikube.k8s.io/name=multinode-240000
	I0328 01:33:32.369897    6044 command_runner.go:130] >                     minikube.k8s.io/primary=false
	I0328 01:33:32.369897    6044 command_runner.go:130] >                     minikube.k8s.io/updated_at=2024_03_28T01_10_55_0700
	I0328 01:33:32.369897    6044 command_runner.go:130] >                     minikube.k8s.io/version=v1.33.0-beta.0
	I0328 01:33:32.369897    6044 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0328 01:33:32.369897    6044 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0328 01:33:32.369897    6044 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0328 01:33:32.369897    6044 command_runner.go:130] > CreationTimestamp:  Thu, 28 Mar 2024 01:10:54 +0000
	I0328 01:33:32.369897    6044 command_runner.go:130] > Taints:             node.kubernetes.io/unreachable:NoExecute
	I0328 01:33:32.369897    6044 command_runner.go:130] >                     node.kubernetes.io/unreachable:NoSchedule
	I0328 01:33:32.369897    6044 command_runner.go:130] > Unschedulable:      false
	I0328 01:33:32.369897    6044 command_runner.go:130] > Lease:
	I0328 01:33:32.369897    6044 command_runner.go:130] >   HolderIdentity:  multinode-240000-m02
	I0328 01:33:32.369897    6044 command_runner.go:130] >   AcquireTime:     <unset>
	I0328 01:33:32.369897    6044 command_runner.go:130] >   RenewTime:       Thu, 28 Mar 2024 01:28:58 +0000
	I0328 01:33:32.369897    6044 command_runner.go:130] > Conditions:
	I0328 01:33:32.369897    6044 command_runner.go:130] >   Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	I0328 01:33:32.369897    6044 command_runner.go:130] >   ----             ------    -----------------                 ------------------                ------              -------
	I0328 01:33:32.369897    6044 command_runner.go:130] >   MemoryPressure   Unknown   Thu, 28 Mar 2024 01:27:18 +0000   Thu, 28 Mar 2024 01:33:12 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0328 01:33:32.369897    6044 command_runner.go:130] >   DiskPressure     Unknown   Thu, 28 Mar 2024 01:27:18 +0000   Thu, 28 Mar 2024 01:33:12 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0328 01:33:32.369897    6044 command_runner.go:130] >   PIDPressure      Unknown   Thu, 28 Mar 2024 01:27:18 +0000   Thu, 28 Mar 2024 01:33:12 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0328 01:33:32.369897    6044 command_runner.go:130] >   Ready            Unknown   Thu, 28 Mar 2024 01:27:18 +0000   Thu, 28 Mar 2024 01:33:12 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0328 01:33:32.369897    6044 command_runner.go:130] > Addresses:
	I0328 01:33:32.369897    6044 command_runner.go:130] >   InternalIP:  172.28.230.250
	I0328 01:33:32.369897    6044 command_runner.go:130] >   Hostname:    multinode-240000-m02
	I0328 01:33:32.370438    6044 command_runner.go:130] > Capacity:
	I0328 01:33:32.370438    6044 command_runner.go:130] >   cpu:                2
	I0328 01:33:32.370438    6044 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0328 01:33:32.370438    6044 command_runner.go:130] >   hugepages-2Mi:      0
	I0328 01:33:32.370484    6044 command_runner.go:130] >   memory:             2164264Ki
	I0328 01:33:32.370484    6044 command_runner.go:130] >   pods:               110
	I0328 01:33:32.370484    6044 command_runner.go:130] > Allocatable:
	I0328 01:33:32.370484    6044 command_runner.go:130] >   cpu:                2
	I0328 01:33:32.370484    6044 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0328 01:33:32.370529    6044 command_runner.go:130] >   hugepages-2Mi:      0
	I0328 01:33:32.370529    6044 command_runner.go:130] >   memory:             2164264Ki
	I0328 01:33:32.370556    6044 command_runner.go:130] >   pods:               110
	I0328 01:33:32.370556    6044 command_runner.go:130] > System Info:
	I0328 01:33:32.370556    6044 command_runner.go:130] >   Machine ID:                 2bcbb6f523d04ea69ba7f23d0cdfec75
	I0328 01:33:32.370556    6044 command_runner.go:130] >   System UUID:                d499bd2a-38ff-6a40-b0a5-5534aeedd854
	I0328 01:33:32.370623    6044 command_runner.go:130] >   Boot ID:                    cfc1ec0e-7646-40c9-8245-9d09d77d6b1d
	I0328 01:33:32.370623    6044 command_runner.go:130] >   Kernel Version:             5.10.207
	I0328 01:33:32.370623    6044 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0328 01:33:32.370623    6044 command_runner.go:130] >   Operating System:           linux
	I0328 01:33:32.370623    6044 command_runner.go:130] >   Architecture:               amd64
	I0328 01:33:32.370623    6044 command_runner.go:130] >   Container Runtime Version:  docker://26.0.0
	I0328 01:33:32.370623    6044 command_runner.go:130] >   Kubelet Version:            v1.29.3
	I0328 01:33:32.370623    6044 command_runner.go:130] >   Kube-Proxy Version:         v1.29.3
	I0328 01:33:32.370701    6044 command_runner.go:130] > PodCIDR:                      10.244.1.0/24
	I0328 01:33:32.370701    6044 command_runner.go:130] > PodCIDRs:                     10.244.1.0/24
	I0328 01:33:32.370701    6044 command_runner.go:130] > Non-terminated Pods:          (3 in total)
	I0328 01:33:32.370701    6044 command_runner.go:130] >   Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0328 01:33:32.370760    6044 command_runner.go:130] >   ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	I0328 01:33:32.370760    6044 command_runner.go:130] >   default                     busybox-7fdf7869d9-zgwm4    0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	I0328 01:33:32.370760    6044 command_runner.go:130] >   kube-system                 kindnet-hsnfl               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      22m
	I0328 01:33:32.370760    6044 command_runner.go:130] >   kube-system                 kube-proxy-t88gz            0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	I0328 01:33:32.370826    6044 command_runner.go:130] > Allocated resources:
	I0328 01:33:32.370826    6044 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0328 01:33:32.370826    6044 command_runner.go:130] >   Resource           Requests   Limits
	I0328 01:33:32.370826    6044 command_runner.go:130] >   --------           --------   ------
	I0328 01:33:32.370826    6044 command_runner.go:130] >   cpu                100m (5%)  100m (5%)
	I0328 01:33:32.370884    6044 command_runner.go:130] >   memory             50Mi (2%)  50Mi (2%)
	I0328 01:33:32.370884    6044 command_runner.go:130] >   ephemeral-storage  0 (0%)     0 (0%)
	I0328 01:33:32.370884    6044 command_runner.go:130] >   hugepages-2Mi      0 (0%)     0 (0%)
	I0328 01:33:32.370884    6044 command_runner.go:130] > Events:
	I0328 01:33:32.370884    6044 command_runner.go:130] >   Type    Reason                   Age                From             Message
	I0328 01:33:32.370884    6044 command_runner.go:130] >   ----    ------                   ----               ----             -------
	I0328 01:33:32.370884    6044 command_runner.go:130] >   Normal  Starting                 22m                kube-proxy       
	I0328 01:33:32.370884    6044 command_runner.go:130] >   Normal  Starting                 22m                kubelet          Starting kubelet.
	I0328 01:33:32.370884    6044 command_runner.go:130] >   Normal  NodeHasSufficientMemory  22m (x2 over 22m)  kubelet          Node multinode-240000-m02 status is now: NodeHasSufficientMemory
	I0328 01:33:32.370884    6044 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    22m (x2 over 22m)  kubelet          Node multinode-240000-m02 status is now: NodeHasNoDiskPressure
	I0328 01:33:32.371015    6044 command_runner.go:130] >   Normal  NodeHasSufficientPID     22m (x2 over 22m)  kubelet          Node multinode-240000-m02 status is now: NodeHasSufficientPID
	I0328 01:33:32.371015    6044 command_runner.go:130] >   Normal  NodeAllocatableEnforced  22m                kubelet          Updated Node Allocatable limit across pods
	I0328 01:33:32.371015    6044 command_runner.go:130] >   Normal  RegisteredNode           22m                node-controller  Node multinode-240000-m02 event: Registered Node multinode-240000-m02 in Controller
	I0328 01:33:32.371015    6044 command_runner.go:130] >   Normal  NodeReady                22m                kubelet          Node multinode-240000-m02 status is now: NodeReady
	I0328 01:33:32.371015    6044 command_runner.go:130] >   Normal  RegisteredNode           60s                node-controller  Node multinode-240000-m02 event: Registered Node multinode-240000-m02 in Controller
	I0328 01:33:32.371015    6044 command_runner.go:130] >   Normal  NodeNotReady             20s                node-controller  Node multinode-240000-m02 status is now: NodeNotReady
	I0328 01:33:32.371015    6044 command_runner.go:130] > Name:               multinode-240000-m03
	I0328 01:33:32.371015    6044 command_runner.go:130] > Roles:              <none>
	I0328 01:33:32.371015    6044 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0328 01:33:32.371015    6044 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0328 01:33:32.371143    6044 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0328 01:33:32.371143    6044 command_runner.go:130] >                     kubernetes.io/hostname=multinode-240000-m03
	I0328 01:33:32.371143    6044 command_runner.go:130] >                     kubernetes.io/os=linux
	I0328 01:33:32.371143    6044 command_runner.go:130] >                     minikube.k8s.io/commit=1a940f980f77c9bfc8ef93678db8c1f49c9dd79d
	I0328 01:33:32.371143    6044 command_runner.go:130] >                     minikube.k8s.io/name=multinode-240000
	I0328 01:33:32.371143    6044 command_runner.go:130] >                     minikube.k8s.io/primary=false
	I0328 01:33:32.371143    6044 command_runner.go:130] >                     minikube.k8s.io/updated_at=2024_03_28T01_27_31_0700
	I0328 01:33:32.371143    6044 command_runner.go:130] >                     minikube.k8s.io/version=v1.33.0-beta.0
	I0328 01:33:32.371143    6044 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0328 01:33:32.371220    6044 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0328 01:33:32.371247    6044 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0328 01:33:32.371247    6044 command_runner.go:130] > CreationTimestamp:  Thu, 28 Mar 2024 01:27:30 +0000
	I0328 01:33:32.371247    6044 command_runner.go:130] > Taints:             node.kubernetes.io/unreachable:NoExecute
	I0328 01:33:32.371247    6044 command_runner.go:130] >                     node.kubernetes.io/unreachable:NoSchedule
	I0328 01:33:32.371247    6044 command_runner.go:130] > Unschedulable:      false
	I0328 01:33:32.371247    6044 command_runner.go:130] > Lease:
	I0328 01:33:32.371247    6044 command_runner.go:130] >   HolderIdentity:  multinode-240000-m03
	I0328 01:33:32.371330    6044 command_runner.go:130] >   AcquireTime:     <unset>
	I0328 01:33:32.371330    6044 command_runner.go:130] >   RenewTime:       Thu, 28 Mar 2024 01:28:31 +0000
	I0328 01:33:32.371330    6044 command_runner.go:130] > Conditions:
	I0328 01:33:32.371330    6044 command_runner.go:130] >   Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	I0328 01:33:32.371330    6044 command_runner.go:130] >   ----             ------    -----------------                 ------------------                ------              -------
	I0328 01:33:32.371330    6044 command_runner.go:130] >   MemoryPressure   Unknown   Thu, 28 Mar 2024 01:27:36 +0000   Thu, 28 Mar 2024 01:29:14 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0328 01:33:32.371330    6044 command_runner.go:130] >   DiskPressure     Unknown   Thu, 28 Mar 2024 01:27:36 +0000   Thu, 28 Mar 2024 01:29:14 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0328 01:33:32.371330    6044 command_runner.go:130] >   PIDPressure      Unknown   Thu, 28 Mar 2024 01:27:36 +0000   Thu, 28 Mar 2024 01:29:14 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0328 01:33:32.371330    6044 command_runner.go:130] >   Ready            Unknown   Thu, 28 Mar 2024 01:27:36 +0000   Thu, 28 Mar 2024 01:29:14 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0328 01:33:32.371330    6044 command_runner.go:130] > Addresses:
	I0328 01:33:32.371330    6044 command_runner.go:130] >   InternalIP:  172.28.224.172
	I0328 01:33:32.371330    6044 command_runner.go:130] >   Hostname:    multinode-240000-m03
	I0328 01:33:32.371330    6044 command_runner.go:130] > Capacity:
	I0328 01:33:32.371330    6044 command_runner.go:130] >   cpu:                2
	I0328 01:33:32.371330    6044 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0328 01:33:32.371330    6044 command_runner.go:130] >   hugepages-2Mi:      0
	I0328 01:33:32.371330    6044 command_runner.go:130] >   memory:             2164264Ki
	I0328 01:33:32.371330    6044 command_runner.go:130] >   pods:               110
	I0328 01:33:32.371330    6044 command_runner.go:130] > Allocatable:
	I0328 01:33:32.371330    6044 command_runner.go:130] >   cpu:                2
	I0328 01:33:32.371330    6044 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0328 01:33:32.371330    6044 command_runner.go:130] >   hugepages-2Mi:      0
	I0328 01:33:32.371330    6044 command_runner.go:130] >   memory:             2164264Ki
	I0328 01:33:32.371330    6044 command_runner.go:130] >   pods:               110
	I0328 01:33:32.371330    6044 command_runner.go:130] > System Info:
	I0328 01:33:32.371330    6044 command_runner.go:130] >   Machine ID:                 53e5a22090614654950f5f4d91307651
	I0328 01:33:32.371330    6044 command_runner.go:130] >   System UUID:                1b1cc332-0092-fa4b-8d09-1c599aae83ad
	I0328 01:33:32.371330    6044 command_runner.go:130] >   Boot ID:                    7cabd891-d8ad-4af2-8060-94ae87174528
	I0328 01:33:32.371330    6044 command_runner.go:130] >   Kernel Version:             5.10.207
	I0328 01:33:32.371330    6044 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0328 01:33:32.371330    6044 command_runner.go:130] >   Operating System:           linux
	I0328 01:33:32.371330    6044 command_runner.go:130] >   Architecture:               amd64
	I0328 01:33:32.371330    6044 command_runner.go:130] >   Container Runtime Version:  docker://26.0.0
	I0328 01:33:32.371330    6044 command_runner.go:130] >   Kubelet Version:            v1.29.3
	I0328 01:33:32.371330    6044 command_runner.go:130] >   Kube-Proxy Version:         v1.29.3
	I0328 01:33:32.371330    6044 command_runner.go:130] > PodCIDR:                      10.244.3.0/24
	I0328 01:33:32.371330    6044 command_runner.go:130] > PodCIDRs:                     10.244.3.0/24
	I0328 01:33:32.371330    6044 command_runner.go:130] > Non-terminated Pods:          (2 in total)
	I0328 01:33:32.371330    6044 command_runner.go:130] >   Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0328 01:33:32.371330    6044 command_runner.go:130] >   ---------                   ----                ------------  ----------  ---------------  -------------  ---
	I0328 01:33:32.371330    6044 command_runner.go:130] >   kube-system                 kindnet-jvgx2       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      17m
	I0328 01:33:32.371330    6044 command_runner.go:130] >   kube-system                 kube-proxy-55rch    0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	I0328 01:33:32.371330    6044 command_runner.go:130] > Allocated resources:
	I0328 01:33:32.371330    6044 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0328 01:33:32.371864    6044 command_runner.go:130] >   Resource           Requests   Limits
	I0328 01:33:32.371864    6044 command_runner.go:130] >   --------           --------   ------
	I0328 01:33:32.371864    6044 command_runner.go:130] >   cpu                100m (5%)  100m (5%)
	I0328 01:33:32.371907    6044 command_runner.go:130] >   memory             50Mi (2%)  50Mi (2%)
	I0328 01:33:32.371907    6044 command_runner.go:130] >   ephemeral-storage  0 (0%)     0 (0%)
	I0328 01:33:32.371907    6044 command_runner.go:130] >   hugepages-2Mi      0 (0%)     0 (0%)
	I0328 01:33:32.371907    6044 command_runner.go:130] > Events:
	I0328 01:33:32.371907    6044 command_runner.go:130] >   Type    Reason                   Age                  From             Message
	I0328 01:33:32.371907    6044 command_runner.go:130] >   ----    ------                   ----                 ----             -------
	I0328 01:33:32.371907    6044 command_runner.go:130] >   Normal  Starting                 17m                  kube-proxy       
	I0328 01:33:32.371907    6044 command_runner.go:130] >   Normal  Starting                 5m59s                kube-proxy       
	I0328 01:33:32.371998    6044 command_runner.go:130] >   Normal  NodeHasSufficientMemory  17m (x2 over 17m)    kubelet          Node multinode-240000-m03 status is now: NodeHasSufficientMemory
	I0328 01:33:32.371998    6044 command_runner.go:130] >   Normal  NodeAllocatableEnforced  17m                  kubelet          Updated Node Allocatable limit across pods
	I0328 01:33:32.371998    6044 command_runner.go:130] >   Normal  Starting                 17m                  kubelet          Starting kubelet.
	I0328 01:33:32.371998    6044 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    17m (x2 over 17m)    kubelet          Node multinode-240000-m03 status is now: NodeHasNoDiskPressure
	I0328 01:33:32.371998    6044 command_runner.go:130] >   Normal  NodeHasSufficientPID     17m (x2 over 17m)    kubelet          Node multinode-240000-m03 status is now: NodeHasSufficientPID
	I0328 01:33:32.372091    6044 command_runner.go:130] >   Normal  NodeReady                17m                  kubelet          Node multinode-240000-m03 status is now: NodeReady
	I0328 01:33:32.372091    6044 command_runner.go:130] >   Normal  Starting                 6m2s                 kubelet          Starting kubelet.
	I0328 01:33:32.372091    6044 command_runner.go:130] >   Normal  NodeHasSufficientMemory  6m2s (x2 over 6m2s)  kubelet          Node multinode-240000-m03 status is now: NodeHasSufficientMemory
	I0328 01:33:32.372091    6044 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    6m2s (x2 over 6m2s)  kubelet          Node multinode-240000-m03 status is now: NodeHasNoDiskPressure
	I0328 01:33:32.372152    6044 command_runner.go:130] >   Normal  NodeHasSufficientPID     6m2s (x2 over 6m2s)  kubelet          Node multinode-240000-m03 status is now: NodeHasSufficientPID
	I0328 01:33:32.372152    6044 command_runner.go:130] >   Normal  NodeAllocatableEnforced  6m2s                 kubelet          Updated Node Allocatable limit across pods
	I0328 01:33:32.372152    6044 command_runner.go:130] >   Normal  RegisteredNode           5m58s                node-controller  Node multinode-240000-m03 event: Registered Node multinode-240000-m03 in Controller
	I0328 01:33:32.372216    6044 command_runner.go:130] >   Normal  NodeReady                5m56s                kubelet          Node multinode-240000-m03 status is now: NodeReady
	I0328 01:33:32.372216    6044 command_runner.go:130] >   Normal  NodeNotReady             4m18s                node-controller  Node multinode-240000-m03 status is now: NodeNotReady
	I0328 01:33:32.372216    6044 command_runner.go:130] >   Normal  RegisteredNode           60s                  node-controller  Node multinode-240000-m03 event: Registered Node multinode-240000-m03 in Controller
	I0328 01:33:32.382911    6044 logs.go:123] Gathering logs for kube-controller-manager [ceaccf323dee] ...
	I0328 01:33:32.382911    6044 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ceaccf323dee"
	I0328 01:33:32.418985    6044 command_runner.go:130] ! I0328 01:32:17.221400       1 serving.go:380] Generated self-signed cert in-memory
	I0328 01:33:32.419060    6044 command_runner.go:130] ! I0328 01:32:17.938996       1 controllermanager.go:187] "Starting" version="v1.29.3"
	I0328 01:33:32.419132    6044 command_runner.go:130] ! I0328 01:32:17.939043       1 controllermanager.go:189] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0328 01:33:32.419201    6044 command_runner.go:130] ! I0328 01:32:17.943203       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0328 01:33:32.419281    6044 command_runner.go:130] ! I0328 01:32:17.943369       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0328 01:33:32.419342    6044 command_runner.go:130] ! I0328 01:32:17.944549       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0328 01:33:32.419397    6044 command_runner.go:130] ! I0328 01:32:17.944700       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0328 01:33:32.419397    6044 command_runner.go:130] ! I0328 01:32:21.401842       1 controllermanager.go:735] "Started controller" controller="serviceaccount-token-controller"
	I0328 01:33:32.419397    6044 command_runner.go:130] ! I0328 01:32:21.405585       1 shared_informer.go:311] Waiting for caches to sync for tokens
	I0328 01:33:32.419397    6044 command_runner.go:130] ! I0328 01:32:21.409924       1 controllermanager.go:735] "Started controller" controller="cronjob-controller"
	I0328 01:33:32.419397    6044 command_runner.go:130] ! I0328 01:32:21.410592       1 cronjob_controllerv2.go:139] "Starting cronjob controller v2"
	I0328 01:33:32.419397    6044 command_runner.go:130] ! I0328 01:32:21.410608       1 shared_informer.go:311] Waiting for caches to sync for cronjob
	I0328 01:33:32.419397    6044 command_runner.go:130] ! I0328 01:32:21.415437       1 controllermanager.go:735] "Started controller" controller="certificatesigningrequest-cleaner-controller"
	I0328 01:33:32.419397    6044 command_runner.go:130] ! I0328 01:32:21.415588       1 cleaner.go:83] "Starting CSR cleaner controller"
	I0328 01:33:32.419397    6044 command_runner.go:130] ! I0328 01:32:21.423473       1 controllermanager.go:735] "Started controller" controller="pod-garbage-collector-controller"
	I0328 01:33:32.419397    6044 command_runner.go:130] ! I0328 01:32:21.424183       1 gc_controller.go:101] "Starting GC controller"
	I0328 01:33:32.419397    6044 command_runner.go:130] ! I0328 01:32:21.424205       1 shared_informer.go:311] Waiting for caches to sync for GC
	I0328 01:33:32.419397    6044 command_runner.go:130] ! I0328 01:32:21.428774       1 controllermanager.go:735] "Started controller" controller="replicaset-controller"
	I0328 01:33:32.419397    6044 command_runner.go:130] ! I0328 01:32:21.429480       1 replica_set.go:214] "Starting controller" name="replicaset"
	I0328 01:33:32.419397    6044 command_runner.go:130] ! I0328 01:32:21.429495       1 shared_informer.go:311] Waiting for caches to sync for ReplicaSet
	I0328 01:33:32.419397    6044 command_runner.go:130] ! I0328 01:32:21.434934       1 controllermanager.go:735] "Started controller" controller="token-cleaner-controller"
	I0328 01:33:32.419397    6044 command_runner.go:130] ! I0328 01:32:21.435336       1 tokencleaner.go:112] "Starting token cleaner controller"
	I0328 01:33:32.419397    6044 command_runner.go:130] ! I0328 01:32:21.440600       1 shared_informer.go:311] Waiting for caches to sync for token_cleaner
	I0328 01:33:32.419942    6044 command_runner.go:130] ! I0328 01:32:21.440609       1 shared_informer.go:318] Caches are synced for token_cleaner
	I0328 01:33:32.419942    6044 command_runner.go:130] ! I0328 01:32:21.447308       1 controllermanager.go:735] "Started controller" controller="persistentvolume-binder-controller"
	I0328 01:33:32.419942    6044 command_runner.go:130] ! I0328 01:32:21.450160       1 pv_controller_base.go:319] "Starting persistent volume controller"
	I0328 01:33:32.420047    6044 command_runner.go:130] ! I0328 01:32:21.450574       1 shared_informer.go:311] Waiting for caches to sync for persistent volume
	I0328 01:33:32.420142    6044 command_runner.go:130] ! I0328 01:32:21.459890       1 controllermanager.go:735] "Started controller" controller="taint-eviction-controller"
	I0328 01:33:32.420216    6044 command_runner.go:130] ! I0328 01:32:21.463892       1 taint_eviction.go:285] "Starting" controller="taint-eviction-controller"
	I0328 01:33:32.420216    6044 command_runner.go:130] ! I0328 01:32:21.464792       1 taint_eviction.go:291] "Sending events to api server"
	I0328 01:33:32.420216    6044 command_runner.go:130] ! I0328 01:32:21.465478       1 shared_informer.go:311] Waiting for caches to sync for taint-eviction-controller
	I0328 01:33:32.420273    6044 command_runner.go:130] ! I0328 01:32:21.467842       1 controllermanager.go:735] "Started controller" controller="endpoints-controller"
	I0328 01:33:32.420273    6044 command_runner.go:130] ! I0328 01:32:21.471786       1 endpoints_controller.go:174] "Starting endpoint controller"
	I0328 01:33:32.420297    6044 command_runner.go:130] ! I0328 01:32:21.472200       1 shared_informer.go:311] Waiting for caches to sync for endpoint
	I0328 01:33:32.420324    6044 command_runner.go:130] ! I0328 01:32:21.482388       1 controllermanager.go:735] "Started controller" controller="endpointslice-mirroring-controller"
	I0328 01:33:32.420324    6044 command_runner.go:130] ! I0328 01:32:21.482635       1 endpointslicemirroring_controller.go:223] "Starting EndpointSliceMirroring controller"
	I0328 01:33:32.420324    6044 command_runner.go:130] ! I0328 01:32:21.482650       1 shared_informer.go:311] Waiting for caches to sync for endpoint_slice_mirroring
	I0328 01:33:32.420324    6044 command_runner.go:130] ! I0328 01:32:21.506106       1 shared_informer.go:318] Caches are synced for tokens
	I0328 01:33:32.420324    6044 command_runner.go:130] ! I0328 01:32:21.543460       1 controllermanager.go:735] "Started controller" controller="namespace-controller"
	I0328 01:33:32.420324    6044 command_runner.go:130] ! I0328 01:32:21.543999       1 namespace_controller.go:197] "Starting namespace controller"
	I0328 01:33:32.420324    6044 command_runner.go:130] ! I0328 01:32:21.544021       1 shared_informer.go:311] Waiting for caches to sync for namespace
	I0328 01:33:32.420324    6044 command_runner.go:130] ! I0328 01:32:21.554383       1 controllermanager.go:735] "Started controller" controller="serviceaccount-controller"
	I0328 01:33:32.420324    6044 command_runner.go:130] ! I0328 01:32:21.555541       1 serviceaccounts_controller.go:111] "Starting service account controller"
	I0328 01:33:32.420324    6044 command_runner.go:130] ! I0328 01:32:21.555562       1 shared_informer.go:311] Waiting for caches to sync for service account
	I0328 01:33:32.420324    6044 command_runner.go:130] ! I0328 01:32:21.587795       1 garbagecollector.go:155] "Starting controller" controller="garbagecollector"
	I0328 01:33:32.420324    6044 command_runner.go:130] ! I0328 01:32:21.587823       1 shared_informer.go:311] Waiting for caches to sync for garbage collector
	I0328 01:33:32.420324    6044 command_runner.go:130] ! I0328 01:32:21.587848       1 graph_builder.go:302] "Running" component="GraphBuilder"
	I0328 01:33:32.420324    6044 command_runner.go:130] ! I0328 01:32:21.592263       1 controllermanager.go:735] "Started controller" controller="garbage-collector-controller"
	I0328 01:33:32.420324    6044 command_runner.go:130] ! E0328 01:32:21.607017       1 core.go:270] "Failed to start cloud node lifecycle controller" err="no cloud provider provided"
	I0328 01:33:32.420324    6044 command_runner.go:130] ! I0328 01:32:21.607046       1 controllermanager.go:713] "Warning: skipping controller" controller="cloud-node-lifecycle-controller"
	I0328 01:33:32.420324    6044 command_runner.go:130] ! I0328 01:32:21.629420       1 controllermanager.go:735] "Started controller" controller="persistentvolume-expander-controller"
	I0328 01:33:32.420324    6044 command_runner.go:130] ! I0328 01:32:21.629868       1 expand_controller.go:328] "Starting expand controller"
	I0328 01:33:32.420324    6044 command_runner.go:130] ! I0328 01:32:21.633210       1 shared_informer.go:311] Waiting for caches to sync for expand
	I0328 01:33:32.420324    6044 command_runner.go:130] ! I0328 01:32:21.640307       1 controllermanager.go:735] "Started controller" controller="endpointslice-controller"
	I0328 01:33:32.420854    6044 command_runner.go:130] ! I0328 01:32:21.640871       1 endpointslice_controller.go:264] "Starting endpoint slice controller"
	I0328 01:33:32.420854    6044 command_runner.go:130] ! I0328 01:32:21.641527       1 shared_informer.go:311] Waiting for caches to sync for endpoint_slice
	I0328 01:33:32.420939    6044 command_runner.go:130] ! I0328 01:32:21.649017       1 controllermanager.go:735] "Started controller" controller="replicationcontroller-controller"
	I0328 01:33:32.420939    6044 command_runner.go:130] ! I0328 01:32:21.649755       1 replica_set.go:214] "Starting controller" name="replicationcontroller"
	I0328 01:33:32.421005    6044 command_runner.go:130] ! I0328 01:32:21.649783       1 shared_informer.go:311] Waiting for caches to sync for ReplicationController
	I0328 01:33:32.421005    6044 command_runner.go:130] ! I0328 01:32:21.663585       1 controllermanager.go:735] "Started controller" controller="deployment-controller"
	I0328 01:33:32.421079    6044 command_runner.go:130] ! I0328 01:32:21.666026       1 deployment_controller.go:168] "Starting controller" controller="deployment"
	I0328 01:33:32.421098    6044 command_runner.go:130] ! I0328 01:32:21.666316       1 shared_informer.go:311] Waiting for caches to sync for deployment
	I0328 01:33:32.421098    6044 command_runner.go:130] ! I0328 01:32:21.701619       1 controllermanager.go:735] "Started controller" controller="disruption-controller"
	I0328 01:33:32.421098    6044 command_runner.go:130] ! I0328 01:32:21.705210       1 disruption.go:433] "Sending events to api server."
	I0328 01:33:32.421157    6044 command_runner.go:130] ! I0328 01:32:21.705303       1 disruption.go:444] "Starting disruption controller"
	I0328 01:33:32.421157    6044 command_runner.go:130] ! I0328 01:32:21.705318       1 shared_informer.go:311] Waiting for caches to sync for disruption
	I0328 01:33:32.421208    6044 command_runner.go:130] ! I0328 01:32:21.710857       1 controllermanager.go:735] "Started controller" controller="statefulset-controller"
	I0328 01:33:32.421208    6044 command_runner.go:130] ! I0328 01:32:21.711002       1 stateful_set.go:161] "Starting stateful set controller"
	I0328 01:33:32.421208    6044 command_runner.go:130] ! I0328 01:32:21.711016       1 shared_informer.go:311] Waiting for caches to sync for stateful set
	I0328 01:33:32.421208    6044 command_runner.go:130] ! I0328 01:32:21.722757       1 certificate_controller.go:115] "Starting certificate controller" name="csrsigning-kubelet-serving"
	I0328 01:33:32.421208    6044 command_runner.go:130] ! I0328 01:32:21.723222       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-kubelet-serving
	I0328 01:33:32.421208    6044 command_runner.go:130] ! I0328 01:32:21.723310       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0328 01:33:32.421208    6044 command_runner.go:130] ! I0328 01:32:21.725677       1 certificate_controller.go:115] "Starting certificate controller" name="csrsigning-kubelet-client"
	I0328 01:33:32.421208    6044 command_runner.go:130] ! I0328 01:32:21.725696       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-kubelet-client
	I0328 01:33:32.421208    6044 command_runner.go:130] ! I0328 01:32:21.725759       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0328 01:33:32.421208    6044 command_runner.go:130] ! I0328 01:32:21.726507       1 certificate_controller.go:115] "Starting certificate controller" name="csrsigning-kube-apiserver-client"
	I0328 01:33:32.421208    6044 command_runner.go:130] ! I0328 01:32:21.726521       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client
	I0328 01:33:32.421208    6044 command_runner.go:130] ! I0328 01:32:21.726539       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0328 01:33:32.421208    6044 command_runner.go:130] ! I0328 01:32:21.751095       1 certificate_controller.go:115] "Starting certificate controller" name="csrsigning-legacy-unknown"
	I0328 01:33:32.421208    6044 command_runner.go:130] ! I0328 01:32:21.751136       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
	I0328 01:33:32.421208    6044 command_runner.go:130] ! I0328 01:32:21.751164       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0328 01:33:32.421208    6044 command_runner.go:130] ! I0328 01:32:21.751048       1 controllermanager.go:735] "Started controller" controller="certificatesigningrequest-signing-controller"
	I0328 01:33:32.421208    6044 command_runner.go:130] ! E0328 01:32:21.760877       1 core.go:105] "Failed to start service controller" err="WARNING: no cloud provider provided, services of type LoadBalancer will fail"
	I0328 01:33:32.421208    6044 command_runner.go:130] ! I0328 01:32:21.761111       1 controllermanager.go:713] "Warning: skipping controller" controller="service-lb-controller"
	I0328 01:33:32.421208    6044 command_runner.go:130] ! I0328 01:32:21.770248       1 controllermanager.go:735] "Started controller" controller="persistentvolumeclaim-protection-controller"
	I0328 01:33:32.421208    6044 command_runner.go:130] ! I0328 01:32:21.771349       1 pvc_protection_controller.go:102] "Starting PVC protection controller"
	I0328 01:33:32.421208    6044 command_runner.go:130] ! I0328 01:32:21.771929       1 shared_informer.go:311] Waiting for caches to sync for PVC protection
	I0328 01:33:32.421737    6044 command_runner.go:130] ! I0328 01:32:21.788256       1 controllermanager.go:735] "Started controller" controller="root-ca-certificate-publisher-controller"
	I0328 01:33:32.421737    6044 command_runner.go:130] ! I0328 01:32:21.788511       1 publisher.go:102] "Starting root CA cert publisher controller"
	I0328 01:33:32.421737    6044 command_runner.go:130] ! I0328 01:32:21.788524       1 shared_informer.go:311] Waiting for caches to sync for crt configmap
	I0328 01:33:32.421737    6044 command_runner.go:130] ! I0328 01:32:21.815523       1 controllermanager.go:735] "Started controller" controller="legacy-serviceaccount-token-cleaner-controller"
	I0328 01:33:32.421737    6044 command_runner.go:130] ! I0328 01:32:21.815692       1 legacy_serviceaccount_token_cleaner.go:103] "Starting legacy service account token cleaner controller"
	I0328 01:33:32.421824    6044 command_runner.go:130] ! I0328 01:32:21.816619       1 shared_informer.go:311] Waiting for caches to sync for legacy-service-account-token-cleaner
	I0328 01:33:32.421824    6044 command_runner.go:130] ! I0328 01:32:21.873573       1 controllermanager.go:735] "Started controller" controller="horizontal-pod-autoscaler-controller"
	I0328 01:33:32.421824    6044 command_runner.go:130] ! I0328 01:32:21.873852       1 controllermanager.go:687] "Controller is disabled by a feature gate" controller="validatingadmissionpolicy-status-controller" requiredFeatureGates=["ValidatingAdmissionPolicy"]
	I0328 01:33:32.421881    6044 command_runner.go:130] ! I0328 01:32:21.873869       1 controllermanager.go:687] "Controller is disabled by a feature gate" controller="service-cidr-controller" requiredFeatureGates=["MultiCIDRServiceAllocator"]
	I0328 01:33:32.421905    6044 command_runner.go:130] ! I0328 01:32:21.873702       1 horizontal.go:200] "Starting HPA controller"
	I0328 01:33:32.421905    6044 command_runner.go:130] ! I0328 01:32:21.874098       1 shared_informer.go:311] Waiting for caches to sync for HPA
	I0328 01:33:32.421953    6044 command_runner.go:130] ! I0328 01:32:21.901041       1 controllermanager.go:735] "Started controller" controller="daemonset-controller"
	I0328 01:33:32.421971    6044 command_runner.go:130] ! I0328 01:32:21.901450       1 daemon_controller.go:291] "Starting daemon sets controller"
	I0328 01:33:32.421971    6044 command_runner.go:130] ! I0328 01:32:21.901466       1 shared_informer.go:311] Waiting for caches to sync for daemon sets
	I0328 01:33:32.421971    6044 command_runner.go:130] ! I0328 01:32:21.907150       1 controllermanager.go:735] "Started controller" controller="ttl-controller"
	I0328 01:33:32.421971    6044 command_runner.go:130] ! I0328 01:32:21.907285       1 ttl_controller.go:124] "Starting TTL controller"
	I0328 01:33:32.422030    6044 command_runner.go:130] ! I0328 01:32:21.907294       1 shared_informer.go:311] Waiting for caches to sync for TTL
	I0328 01:33:32.422081    6044 command_runner.go:130] ! I0328 01:32:21.918008       1 controllermanager.go:735] "Started controller" controller="bootstrap-signer-controller"
	I0328 01:33:32.422081    6044 command_runner.go:130] ! I0328 01:32:21.918049       1 core.go:294] "Warning: configure-cloud-routes is set, but no cloud provider specified. Will not configure cloud provider routes."
	I0328 01:33:32.422081    6044 command_runner.go:130] ! I0328 01:32:21.918077       1 controllermanager.go:713] "Warning: skipping controller" controller="node-route-controller"
	I0328 01:33:32.422081    6044 command_runner.go:130] ! I0328 01:32:21.918277       1 shared_informer.go:311] Waiting for caches to sync for bootstrap_signer
	I0328 01:33:32.422081    6044 command_runner.go:130] ! I0328 01:32:21.926280       1 controllermanager.go:735] "Started controller" controller="ephemeral-volume-controller"
	I0328 01:33:32.422081    6044 command_runner.go:130] ! I0328 01:32:21.926334       1 controllermanager.go:687] "Controller is disabled by a feature gate" controller="storageversion-garbage-collector-controller" requiredFeatureGates=["APIServerIdentity","StorageVersionAPI"]
	I0328 01:33:32.422081    6044 command_runner.go:130] ! I0328 01:32:21.926586       1 controller.go:169] "Starting ephemeral volume controller"
	I0328 01:33:32.422081    6044 command_runner.go:130] ! I0328 01:32:21.926965       1 shared_informer.go:311] Waiting for caches to sync for ephemeral
	I0328 01:33:32.422081    6044 command_runner.go:130] ! I0328 01:32:22.081182       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="ingresses.networking.k8s.io"
	I0328 01:33:32.422081    6044 command_runner.go:130] ! I0328 01:32:22.083797       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="networkpolicies.networking.k8s.io"
	I0328 01:33:32.422081    6044 command_runner.go:130] ! I0328 01:32:22.084146       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="roles.rbac.authorization.k8s.io"
	I0328 01:33:32.422081    6044 command_runner.go:130] ! I0328 01:32:22.084540       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="csistoragecapacities.storage.k8s.io"
	I0328 01:33:32.422081    6044 command_runner.go:130] ! W0328 01:32:22.084798       1 shared_informer.go:591] resyncPeriod 19h39m22.96948195s is smaller than resyncCheckPeriod 22h4m29.884091788s and the informer has already started. Changing it to 22h4m29.884091788s
	I0328 01:33:32.422081    6044 command_runner.go:130] ! I0328 01:32:22.085208       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="serviceaccounts"
	I0328 01:33:32.422081    6044 command_runner.go:130] ! I0328 01:32:22.085543       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="horizontalpodautoscalers.autoscaling"
	I0328 01:33:32.422081    6044 command_runner.go:130] ! I0328 01:32:22.085825       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="statefulsets.apps"
	I0328 01:33:32.422081    6044 command_runner.go:130] ! I0328 01:32:22.086183       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="controllerrevisions.apps"
	I0328 01:33:32.422081    6044 command_runner.go:130] ! I0328 01:32:22.086894       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="podtemplates"
	I0328 01:33:32.422081    6044 command_runner.go:130] ! I0328 01:32:22.087069       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="daemonsets.apps"
	I0328 01:33:32.422081    6044 command_runner.go:130] ! I0328 01:32:22.087521       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="leases.coordination.k8s.io"
	I0328 01:33:32.422081    6044 command_runner.go:130] ! I0328 01:32:22.087567       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="endpoints"
	I0328 01:33:32.422081    6044 command_runner.go:130] ! W0328 01:32:22.087624       1 shared_informer.go:591] resyncPeriod 12h6m23.941100832s is smaller than resyncCheckPeriod 22h4m29.884091788s and the informer has already started. Changing it to 22h4m29.884091788s
	I0328 01:33:32.422081    6044 command_runner.go:130] ! I0328 01:32:22.087903       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="deployments.apps"
	I0328 01:33:32.422621    6044 command_runner.go:130] ! I0328 01:32:22.088034       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="rolebindings.rbac.authorization.k8s.io"
	I0328 01:33:32.422696    6044 command_runner.go:130] ! I0328 01:32:22.088275       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="endpointslices.discovery.k8s.io"
	I0328 01:33:32.422696    6044 command_runner.go:130] ! I0328 01:32:22.088741       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="replicasets.apps"
	I0328 01:33:32.422752    6044 command_runner.go:130] ! I0328 01:32:22.089011       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="poddisruptionbudgets.policy"
	I0328 01:33:32.422752    6044 command_runner.go:130] ! I0328 01:32:22.104096       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="cronjobs.batch"
	I0328 01:33:32.422785    6044 command_runner.go:130] ! I0328 01:32:22.124297       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="jobs.batch"
	I0328 01:33:32.422785    6044 command_runner.go:130] ! I0328 01:32:22.131348       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="limitranges"
	I0328 01:33:32.422829    6044 command_runner.go:130] ! I0328 01:32:22.132084       1 controllermanager.go:735] "Started controller" controller="resourcequota-controller"
	I0328 01:33:32.422829    6044 command_runner.go:130] ! I0328 01:32:22.132998       1 resource_quota_controller.go:294] "Starting resource quota controller"
	I0328 01:33:32.422890    6044 command_runner.go:130] ! I0328 01:32:22.133345       1 shared_informer.go:311] Waiting for caches to sync for resource quota
	I0328 01:33:32.422890    6044 command_runner.go:130] ! I0328 01:32:22.134354       1 resource_quota_monitor.go:305] "QuotaMonitor running"
	I0328 01:33:32.422890    6044 command_runner.go:130] ! I0328 01:32:22.146807       1 controllermanager.go:735] "Started controller" controller="job-controller"
	I0328 01:33:32.422977    6044 command_runner.go:130] ! I0328 01:32:22.147286       1 job_controller.go:224] "Starting job controller"
	I0328 01:33:32.423005    6044 command_runner.go:130] ! I0328 01:32:22.147508       1 shared_informer.go:311] Waiting for caches to sync for job
	I0328 01:33:32.423005    6044 command_runner.go:130] ! I0328 01:32:22.165018       1 node_lifecycle_controller.go:425] "Controller will reconcile labels"
	I0328 01:33:32.423005    6044 command_runner.go:130] ! I0328 01:32:22.165501       1 controllermanager.go:735] "Started controller" controller="node-lifecycle-controller"
	I0328 01:33:32.423005    6044 command_runner.go:130] ! I0328 01:32:22.165846       1 node_lifecycle_controller.go:459] "Sending events to api server"
	I0328 01:33:32.423005    6044 command_runner.go:130] ! I0328 01:32:22.166330       1 node_lifecycle_controller.go:470] "Starting node controller"
	I0328 01:33:32.423005    6044 command_runner.go:130] ! I0328 01:32:22.167894       1 shared_informer.go:311] Waiting for caches to sync for taint
	I0328 01:33:32.423005    6044 command_runner.go:130] ! I0328 01:32:22.212429       1 controllermanager.go:735] "Started controller" controller="clusterrole-aggregation-controller"
	I0328 01:33:32.423005    6044 command_runner.go:130] ! I0328 01:32:22.212522       1 clusterroleaggregation_controller.go:189] "Starting ClusterRoleAggregator controller"
	I0328 01:33:32.423005    6044 command_runner.go:130] ! I0328 01:32:22.212533       1 shared_informer.go:311] Waiting for caches to sync for ClusterRoleAggregator
	I0328 01:33:32.423005    6044 command_runner.go:130] ! I0328 01:32:22.258526       1 controllermanager.go:735] "Started controller" controller="persistentvolume-protection-controller"
	I0328 01:33:32.423005    6044 command_runner.go:130] ! I0328 01:32:22.258865       1 pv_protection_controller.go:78] "Starting PV protection controller"
	I0328 01:33:32.423005    6044 command_runner.go:130] ! I0328 01:32:22.258907       1 shared_informer.go:311] Waiting for caches to sync for PV protection
	I0328 01:33:32.423005    6044 command_runner.go:130] ! I0328 01:32:22.324062       1 controllermanager.go:735] "Started controller" controller="ttl-after-finished-controller"
	I0328 01:33:32.423005    6044 command_runner.go:130] ! I0328 01:32:22.324128       1 ttlafterfinished_controller.go:109] "Starting TTL after finished controller"
	I0328 01:33:32.423005    6044 command_runner.go:130] ! I0328 01:32:22.324137       1 shared_informer.go:311] Waiting for caches to sync for TTL after finished
	I0328 01:33:32.423005    6044 command_runner.go:130] ! I0328 01:32:22.358296       1 controllermanager.go:735] "Started controller" controller="certificatesigningrequest-approving-controller"
	I0328 01:33:32.423005    6044 command_runner.go:130] ! I0328 01:32:22.358367       1 certificate_controller.go:115] "Starting certificate controller" name="csrapproving"
	I0328 01:33:32.423005    6044 command_runner.go:130] ! I0328 01:32:22.358377       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrapproving
	I0328 01:33:32.423005    6044 command_runner.go:130] ! I0328 01:32:32.447083       1 range_allocator.go:111] "No Secondary Service CIDR provided. Skipping filtering out secondary service addresses"
	I0328 01:33:32.423005    6044 command_runner.go:130] ! I0328 01:32:32.447529       1 node_ipam_controller.go:160] "Starting ipam controller"
	I0328 01:33:32.423005    6044 command_runner.go:130] ! I0328 01:32:32.447619       1 shared_informer.go:311] Waiting for caches to sync for node
	I0328 01:33:32.423005    6044 command_runner.go:130] ! I0328 01:32:32.447221       1 controllermanager.go:735] "Started controller" controller="node-ipam-controller"
	I0328 01:33:32.423005    6044 command_runner.go:130] ! I0328 01:32:32.451626       1 controllermanager.go:735] "Started controller" controller="persistentvolume-attach-detach-controller"
	I0328 01:33:32.423005    6044 command_runner.go:130] ! I0328 01:32:32.451960       1 controllermanager.go:687] "Controller is disabled by a feature gate" controller="resourceclaim-controller" requiredFeatureGates=["DynamicResourceAllocation"]
	I0328 01:33:32.423005    6044 command_runner.go:130] ! I0328 01:32:32.451695       1 attach_detach_controller.go:337] "Starting attach detach controller"
	I0328 01:33:32.423005    6044 command_runner.go:130] ! I0328 01:32:32.452296       1 shared_informer.go:311] Waiting for caches to sync for attach detach
	I0328 01:33:32.423005    6044 command_runner.go:130] ! I0328 01:32:32.465613       1 shared_informer.go:311] Waiting for caches to sync for resource quota
	I0328 01:33:32.423005    6044 command_runner.go:130] ! I0328 01:32:32.470233       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-240000-m02"
	I0328 01:33:32.423005    6044 command_runner.go:130] ! I0328 01:32:32.470509       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-240000-m02"
	I0328 01:33:32.423005    6044 command_runner.go:130] ! I0328 01:32:32.470641       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-240000-m02"
	I0328 01:33:32.423005    6044 command_runner.go:130] ! I0328 01:32:32.471011       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-240000\" does not exist"
	I0328 01:33:32.423005    6044 command_runner.go:130] ! I0328 01:32:32.471142       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-240000-m02\" does not exist"
	I0328 01:33:32.423005    6044 command_runner.go:130] ! I0328 01:32:32.471391       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-240000-m03\" does not exist"
	I0328 01:33:32.423005    6044 command_runner.go:130] ! I0328 01:32:32.496560       1 shared_informer.go:311] Waiting for caches to sync for garbage collector
	I0328 01:33:32.423005    6044 command_runner.go:130] ! I0328 01:32:32.507769       1 shared_informer.go:318] Caches are synced for TTL
	I0328 01:33:32.423534    6044 command_runner.go:130] ! I0328 01:32:32.513624       1 shared_informer.go:318] Caches are synced for ClusterRoleAggregator
	I0328 01:33:32.423534    6044 command_runner.go:130] ! I0328 01:32:32.518304       1 shared_informer.go:318] Caches are synced for bootstrap_signer
	I0328 01:33:32.423534    6044 command_runner.go:130] ! I0328 01:32:32.519904       1 shared_informer.go:318] Caches are synced for cronjob
	I0328 01:33:32.423578    6044 command_runner.go:130] ! I0328 01:32:32.524287       1 shared_informer.go:318] Caches are synced for TTL after finished
	I0328 01:33:32.423578    6044 command_runner.go:130] ! I0328 01:32:32.529587       1 shared_informer.go:318] Caches are synced for ReplicaSet
	I0328 01:33:32.423578    6044 command_runner.go:130] ! I0328 01:32:32.531767       1 shared_informer.go:318] Caches are synced for ephemeral
	I0328 01:33:32.423578    6044 command_runner.go:130] ! I0328 01:32:32.533493       1 shared_informer.go:318] Caches are synced for expand
	I0328 01:33:32.423578    6044 command_runner.go:130] ! I0328 01:32:32.549795       1 shared_informer.go:318] Caches are synced for job
	I0328 01:33:32.423578    6044 command_runner.go:130] ! I0328 01:32:32.550526       1 shared_informer.go:318] Caches are synced for namespace
	I0328 01:33:32.423578    6044 command_runner.go:130] ! I0328 01:32:32.550874       1 shared_informer.go:318] Caches are synced for ReplicationController
	I0328 01:33:32.423578    6044 command_runner.go:130] ! I0328 01:32:32.551065       1 shared_informer.go:318] Caches are synced for node
	I0328 01:33:32.423578    6044 command_runner.go:130] ! I0328 01:32:32.551152       1 range_allocator.go:174] "Sending events to api server"
	I0328 01:33:32.423712    6044 command_runner.go:130] ! I0328 01:32:32.551255       1 range_allocator.go:178] "Starting range CIDR allocator"
	I0328 01:33:32.423849    6044 command_runner.go:130] ! I0328 01:32:32.551308       1 shared_informer.go:311] Waiting for caches to sync for cidrallocator
	I0328 01:33:32.423849    6044 command_runner.go:130] ! I0328 01:32:32.551340       1 shared_informer.go:318] Caches are synced for cidrallocator
	I0328 01:33:32.423849    6044 command_runner.go:130] ! I0328 01:32:32.554992       1 shared_informer.go:318] Caches are synced for attach detach
	I0328 01:33:32.423849    6044 command_runner.go:130] ! I0328 01:32:32.555603       1 shared_informer.go:318] Caches are synced for service account
	I0328 01:33:32.423849    6044 command_runner.go:130] ! I0328 01:32:32.555933       1 shared_informer.go:318] Caches are synced for persistent volume
	I0328 01:33:32.423849    6044 command_runner.go:130] ! I0328 01:32:32.568824       1 shared_informer.go:318] Caches are synced for taint
	I0328 01:33:32.423849    6044 command_runner.go:130] ! I0328 01:32:32.568944       1 node_lifecycle_controller.go:1222] "Initializing eviction metric for zone" zone=""
	I0328 01:33:32.423849    6044 command_runner.go:130] ! I0328 01:32:32.568985       1 shared_informer.go:318] Caches are synced for taint-eviction-controller
	I0328 01:33:32.423849    6044 command_runner.go:130] ! I0328 01:32:32.569031       1 shared_informer.go:318] Caches are synced for deployment
	I0328 01:33:32.423849    6044 command_runner.go:130] ! I0328 01:32:32.573248       1 event.go:376] "Event occurred" object="multinode-240000" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-240000 event: Registered Node multinode-240000 in Controller"
	I0328 01:33:32.423849    6044 command_runner.go:130] ! I0328 01:32:32.573552       1 event.go:376] "Event occurred" object="multinode-240000-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-240000-m02 event: Registered Node multinode-240000-m02 in Controller"
	I0328 01:33:32.423849    6044 command_runner.go:130] ! I0328 01:32:32.573778       1 event.go:376] "Event occurred" object="multinode-240000-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-240000-m03 event: Registered Node multinode-240000-m03 in Controller"
	I0328 01:33:32.423849    6044 command_runner.go:130] ! I0328 01:32:32.573567       1 shared_informer.go:318] Caches are synced for PV protection
	I0328 01:33:32.423849    6044 command_runner.go:130] ! I0328 01:32:32.573253       1 shared_informer.go:318] Caches are synced for PVC protection
	I0328 01:33:32.423849    6044 command_runner.go:130] ! I0328 01:32:32.575355       1 shared_informer.go:318] Caches are synced for HPA
	I0328 01:33:32.423849    6044 command_runner.go:130] ! I0328 01:32:32.588982       1 shared_informer.go:318] Caches are synced for crt configmap
	I0328 01:33:32.423849    6044 command_runner.go:130] ! I0328 01:32:32.602942       1 shared_informer.go:318] Caches are synced for daemon sets
	I0328 01:33:32.423849    6044 command_runner.go:130] ! I0328 01:32:32.605960       1 shared_informer.go:318] Caches are synced for disruption
	I0328 01:33:32.423849    6044 command_runner.go:130] ! I0328 01:32:32.607311       1 node_lifecycle_controller.go:874] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-240000"
	I0328 01:33:32.423849    6044 command_runner.go:130] ! I0328 01:32:32.607638       1 node_lifecycle_controller.go:874] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-240000-m02"
	I0328 01:33:32.423849    6044 command_runner.go:130] ! I0328 01:32:32.608098       1 node_lifecycle_controller.go:874] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-240000-m03"
	I0328 01:33:32.423849    6044 command_runner.go:130] ! I0328 01:32:32.608944       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="76.132556ms"
	I0328 01:33:32.423849    6044 command_runner.go:130] ! I0328 01:32:32.609570       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="79.623412ms"
	I0328 01:33:32.423849    6044 command_runner.go:130] ! I0328 01:32:32.610117       1 node_lifecycle_controller.go:1068] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I0328 01:33:32.423849    6044 command_runner.go:130] ! I0328 01:32:32.611937       1 shared_informer.go:318] Caches are synced for stateful set
	I0328 01:33:32.423849    6044 command_runner.go:130] ! I0328 01:32:32.612346       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="59.398µs"
	I0328 01:33:32.423849    6044 command_runner.go:130] ! I0328 01:32:32.612652       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="32.799µs"
	I0328 01:33:32.423849    6044 command_runner.go:130] ! I0328 01:32:32.618783       1 shared_informer.go:318] Caches are synced for legacy-service-account-token-cleaner
	I0328 01:33:32.423849    6044 command_runner.go:130] ! I0328 01:32:32.623971       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-serving
	I0328 01:33:32.424373    6044 command_runner.go:130] ! I0328 01:32:32.624286       1 shared_informer.go:318] Caches are synced for GC
	I0328 01:33:32.424373    6044 command_runner.go:130] ! I0328 01:32:32.626634       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0328 01:33:32.424373    6044 command_runner.go:130] ! I0328 01:32:32.626831       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-client
	I0328 01:33:32.424443    6044 command_runner.go:130] ! I0328 01:32:32.651676       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-legacy-unknown
	I0328 01:33:32.424443    6044 command_runner.go:130] ! I0328 01:32:32.659290       1 shared_informer.go:318] Caches are synced for certificate-csrapproving
	I0328 01:33:32.424443    6044 command_runner.go:130] ! I0328 01:32:32.667521       1 shared_informer.go:318] Caches are synced for resource quota
	I0328 01:33:32.424495    6044 command_runner.go:130] ! I0328 01:32:32.683826       1 shared_informer.go:318] Caches are synced for endpoint_slice_mirroring
	I0328 01:33:32.424495    6044 command_runner.go:130] ! I0328 01:32:32.683944       1 shared_informer.go:318] Caches are synced for endpoint
	I0328 01:33:32.424495    6044 command_runner.go:130] ! I0328 01:32:32.737259       1 shared_informer.go:318] Caches are synced for resource quota
	I0328 01:33:32.424533    6044 command_runner.go:130] ! I0328 01:32:32.742870       1 shared_informer.go:318] Caches are synced for endpoint_slice
	I0328 01:33:32.424533    6044 command_runner.go:130] ! I0328 01:32:33.088175       1 shared_informer.go:318] Caches are synced for garbage collector
	I0328 01:33:32.424578    6044 command_runner.go:130] ! I0328 01:32:33.088209       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I0328 01:33:32.424578    6044 command_runner.go:130] ! I0328 01:32:33.097231       1 shared_informer.go:318] Caches are synced for garbage collector
	I0328 01:33:32.424578    6044 command_runner.go:130] ! I0328 01:32:53.970448       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-240000-m02"
	I0328 01:33:32.424578    6044 command_runner.go:130] ! I0328 01:32:57.647643       1 event.go:376] "Event occurred" object="kube-system/storage-provisioner" fieldPath="" kind="Pod" apiVersion="v1" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod kube-system/storage-provisioner"
	I0328 01:33:32.424664    6044 command_runner.go:130] ! I0328 01:32:57.647943       1 event.go:376] "Event occurred" object="default/busybox-7fdf7869d9-ct428" fieldPath="" kind="Pod" apiVersion="v1" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-7fdf7869d9-ct428"
	I0328 01:33:32.424664    6044 command_runner.go:130] ! I0328 01:32:57.648069       1 event.go:376] "Event occurred" object="kube-system/coredns-76f75df574-776ph" fieldPath="" kind="Pod" apiVersion="v1" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod kube-system/coredns-76f75df574-776ph"
	I0328 01:33:32.424664    6044 command_runner.go:130] ! I0328 01:33:12.667954       1 event.go:376] "Event occurred" object="multinode-240000-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node multinode-240000-m02 status is now: NodeNotReady"
	I0328 01:33:32.424756    6044 command_runner.go:130] ! I0328 01:33:12.686681       1 event.go:376] "Event occurred" object="default/busybox-7fdf7869d9-zgwm4" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0328 01:33:32.424785    6044 command_runner.go:130] ! I0328 01:33:12.698519       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="13.246789ms"
	I0328 01:33:32.424815    6044 command_runner.go:130] ! I0328 01:33:12.699114       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="37.9µs"
	I0328 01:33:32.424815    6044 command_runner.go:130] ! I0328 01:33:12.709080       1 event.go:376] "Event occurred" object="kube-system/kindnet-hsnfl" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0328 01:33:32.424815    6044 command_runner.go:130] ! I0328 01:33:12.733251       1 event.go:376] "Event occurred" object="kube-system/kube-proxy-t88gz" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0328 01:33:32.424815    6044 command_runner.go:130] ! I0328 01:33:25.571898       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="20.940169ms"
	I0328 01:33:32.424815    6044 command_runner.go:130] ! I0328 01:33:25.572013       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="31.4µs"
	I0328 01:33:32.424815    6044 command_runner.go:130] ! I0328 01:33:25.596419       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="70.5µs"
	I0328 01:33:32.424815    6044 command_runner.go:130] ! I0328 01:33:25.652921       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="18.37866ms"
	I0328 01:33:32.424815    6044 command_runner.go:130] ! I0328 01:33:25.653855       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="42.9µs"
	I0328 01:33:32.442682    6044 logs.go:123] Gathering logs for kube-controller-manager [1aa05268773e] ...
	I0328 01:33:32.442682    6044 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1aa05268773e"
	I0328 01:33:32.481704    6044 command_runner.go:130] ! I0328 01:07:25.444563       1 serving.go:380] Generated self-signed cert in-memory
	I0328 01:33:32.481704    6044 command_runner.go:130] ! I0328 01:07:26.119304       1 controllermanager.go:187] "Starting" version="v1.29.3"
	I0328 01:33:32.481704    6044 command_runner.go:130] ! I0328 01:07:26.119639       1 controllermanager.go:189] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0328 01:33:32.481704    6044 command_runner.go:130] ! I0328 01:07:26.122078       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0328 01:33:32.481704    6044 command_runner.go:130] ! I0328 01:07:26.122399       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0328 01:33:32.481704    6044 command_runner.go:130] ! I0328 01:07:26.123748       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0328 01:33:32.481704    6044 command_runner.go:130] ! I0328 01:07:26.124035       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0328 01:33:32.481704    6044 command_runner.go:130] ! I0328 01:07:29.961001       1 controllermanager.go:735] "Started controller" controller="serviceaccount-token-controller"
	I0328 01:33:32.481704    6044 command_runner.go:130] ! I0328 01:07:29.961384       1 shared_informer.go:311] Waiting for caches to sync for tokens
	I0328 01:33:32.481704    6044 command_runner.go:130] ! I0328 01:07:29.977654       1 controllermanager.go:735] "Started controller" controller="serviceaccount-controller"
	I0328 01:33:32.481704    6044 command_runner.go:130] ! I0328 01:07:29.978314       1 serviceaccounts_controller.go:111] "Starting service account controller"
	I0328 01:33:32.482708    6044 command_runner.go:130] ! I0328 01:07:29.978353       1 shared_informer.go:311] Waiting for caches to sync for service account
	I0328 01:33:32.482708    6044 command_runner.go:130] ! I0328 01:07:29.991603       1 controllermanager.go:735] "Started controller" controller="job-controller"
	I0328 01:33:32.482708    6044 command_runner.go:130] ! I0328 01:07:29.992075       1 job_controller.go:224] "Starting job controller"
	I0328 01:33:32.482708    6044 command_runner.go:130] ! I0328 01:07:29.992191       1 shared_informer.go:311] Waiting for caches to sync for job
	I0328 01:33:32.482708    6044 command_runner.go:130] ! I0328 01:07:30.016866       1 controllermanager.go:735] "Started controller" controller="cronjob-controller"
	I0328 01:33:32.482708    6044 command_runner.go:130] ! I0328 01:07:30.017722       1 cronjob_controllerv2.go:139] "Starting cronjob controller v2"
	I0328 01:33:32.482708    6044 command_runner.go:130] ! I0328 01:07:30.017738       1 shared_informer.go:311] Waiting for caches to sync for cronjob
	I0328 01:33:32.482708    6044 command_runner.go:130] ! I0328 01:07:30.032215       1 node_lifecycle_controller.go:425] "Controller will reconcile labels"
	I0328 01:33:32.482708    6044 command_runner.go:130] ! I0328 01:07:30.032285       1 controllermanager.go:735] "Started controller" controller="node-lifecycle-controller"
	I0328 01:33:32.482708    6044 command_runner.go:130] ! I0328 01:07:30.032300       1 core.go:294] "Warning: configure-cloud-routes is set, but no cloud provider specified. Will not configure cloud provider routes."
	I0328 01:33:32.482708    6044 command_runner.go:130] ! I0328 01:07:30.032309       1 controllermanager.go:713] "Warning: skipping controller" controller="node-route-controller"
	I0328 01:33:32.482708    6044 command_runner.go:130] ! I0328 01:07:30.032580       1 node_lifecycle_controller.go:459] "Sending events to api server"
	I0328 01:33:32.482708    6044 command_runner.go:130] ! I0328 01:07:30.032630       1 node_lifecycle_controller.go:470] "Starting node controller"
	I0328 01:33:32.482708    6044 command_runner.go:130] ! I0328 01:07:30.032638       1 shared_informer.go:311] Waiting for caches to sync for taint
	I0328 01:33:32.482708    6044 command_runner.go:130] ! I0328 01:07:30.048026       1 controllermanager.go:735] "Started controller" controller="persistentvolume-protection-controller"
	I0328 01:33:32.482708    6044 command_runner.go:130] ! I0328 01:07:30.048977       1 pv_protection_controller.go:78] "Starting PV protection controller"
	I0328 01:33:32.482708    6044 command_runner.go:130] ! I0328 01:07:30.049064       1 shared_informer.go:311] Waiting for caches to sync for PV protection
	I0328 01:33:32.482708    6044 command_runner.go:130] ! I0328 01:07:30.062689       1 shared_informer.go:318] Caches are synced for tokens
	I0328 01:33:32.482708    6044 command_runner.go:130] ! I0328 01:07:30.089724       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="cronjobs.batch"
	I0328 01:33:32.482708    6044 command_runner.go:130] ! I0328 01:07:30.089888       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="deployments.apps"
	I0328 01:33:32.482708    6044 command_runner.go:130] ! I0328 01:07:30.089911       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="jobs.batch"
	I0328 01:33:32.482708    6044 command_runner.go:130] ! W0328 01:07:30.089999       1 shared_informer.go:591] resyncPeriod 14h20m6.725226039s is smaller than resyncCheckPeriod 16h11m20.804614115s and the informer has already started. Changing it to 16h11m20.804614115s
	I0328 01:33:32.482708    6044 command_runner.go:130] ! I0328 01:07:30.090238       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="serviceaccounts"
	I0328 01:33:32.482708    6044 command_runner.go:130] ! I0328 01:07:30.090386       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="daemonsets.apps"
	I0328 01:33:32.482708    6044 command_runner.go:130] ! I0328 01:07:30.090486       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="csistoragecapacities.storage.k8s.io"
	I0328 01:33:32.482708    6044 command_runner.go:130] ! I0328 01:07:30.090728       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="horizontalpodautoscalers.autoscaling"
	I0328 01:33:32.482708    6044 command_runner.go:130] ! I0328 01:07:30.090833       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="leases.coordination.k8s.io"
	I0328 01:33:32.482708    6044 command_runner.go:130] ! I0328 01:07:30.090916       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="endpointslices.discovery.k8s.io"
	I0328 01:33:32.482708    6044 command_runner.go:130] ! I0328 01:07:30.091233       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="statefulsets.apps"
	I0328 01:33:32.482708    6044 command_runner.go:130] ! I0328 01:07:30.091333       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="endpoints"
	I0328 01:33:32.482708    6044 command_runner.go:130] ! I0328 01:07:30.091456       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="roles.rbac.authorization.k8s.io"
	I0328 01:33:32.482708    6044 command_runner.go:130] ! I0328 01:07:30.091573       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="rolebindings.rbac.authorization.k8s.io"
	I0328 01:33:32.482708    6044 command_runner.go:130] ! I0328 01:07:30.091823       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="ingresses.networking.k8s.io"
	I0328 01:33:32.482708    6044 command_runner.go:130] ! I0328 01:07:30.091924       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="networkpolicies.networking.k8s.io"
	I0328 01:33:32.482708    6044 command_runner.go:130] ! I0328 01:07:30.092241       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="poddisruptionbudgets.policy"
	I0328 01:33:32.482708    6044 command_runner.go:130] ! I0328 01:07:30.092436       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="podtemplates"
	I0328 01:33:32.482708    6044 command_runner.go:130] ! I0328 01:07:30.092587       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="controllerrevisions.apps"
	I0328 01:33:32.482708    6044 command_runner.go:130] ! I0328 01:07:30.092720       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="limitranges"
	I0328 01:33:32.482708    6044 command_runner.go:130] ! I0328 01:07:30.092907       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="replicasets.apps"
	I0328 01:33:32.482708    6044 command_runner.go:130] ! I0328 01:07:30.092993       1 controllermanager.go:735] "Started controller" controller="resourcequota-controller"
	I0328 01:33:32.482708    6044 command_runner.go:130] ! I0328 01:07:30.093270       1 resource_quota_controller.go:294] "Starting resource quota controller"
	I0328 01:33:32.482708    6044 command_runner.go:130] ! I0328 01:07:30.095516       1 shared_informer.go:311] Waiting for caches to sync for resource quota
	I0328 01:33:32.482708    6044 command_runner.go:130] ! I0328 01:07:30.095735       1 resource_quota_monitor.go:305] "QuotaMonitor running"
	I0328 01:33:32.482708    6044 command_runner.go:130] ! I0328 01:07:30.117824       1 controllermanager.go:735] "Started controller" controller="legacy-serviceaccount-token-cleaner-controller"
	I0328 01:33:32.482708    6044 command_runner.go:130] ! I0328 01:07:30.117990       1 legacy_serviceaccount_token_cleaner.go:103] "Starting legacy service account token cleaner controller"
	I0328 01:33:32.482708    6044 command_runner.go:130] ! I0328 01:07:30.118005       1 shared_informer.go:311] Waiting for caches to sync for legacy-service-account-token-cleaner
	I0328 01:33:32.482708    6044 command_runner.go:130] ! I0328 01:07:30.139352       1 controllermanager.go:735] "Started controller" controller="disruption-controller"
	I0328 01:33:32.482708    6044 command_runner.go:130] ! I0328 01:07:30.139526       1 disruption.go:433] "Sending events to api server."
	I0328 01:33:32.482708    6044 command_runner.go:130] ! I0328 01:07:30.139561       1 disruption.go:444] "Starting disruption controller"
	I0328 01:33:32.482708    6044 command_runner.go:130] ! I0328 01:07:30.139568       1 shared_informer.go:311] Waiting for caches to sync for disruption
	I0328 01:33:32.482708    6044 command_runner.go:130] ! I0328 01:07:30.158607       1 controllermanager.go:735] "Started controller" controller="certificatesigningrequest-approving-controller"
	I0328 01:33:32.482708    6044 command_runner.go:130] ! I0328 01:07:30.158860       1 certificate_controller.go:115] "Starting certificate controller" name="csrapproving"
	I0328 01:33:32.482708    6044 command_runner.go:130] ! I0328 01:07:30.158912       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrapproving
	I0328 01:33:32.482708    6044 command_runner.go:130] ! I0328 01:07:30.170615       1 controllermanager.go:735] "Started controller" controller="persistentvolume-binder-controller"
	I0328 01:33:32.482708    6044 command_runner.go:130] ! I0328 01:07:30.171245       1 pv_controller_base.go:319] "Starting persistent volume controller"
	I0328 01:33:32.482708    6044 command_runner.go:130] ! I0328 01:07:30.171330       1 shared_informer.go:311] Waiting for caches to sync for persistent volume
	I0328 01:33:32.482708    6044 command_runner.go:130] ! I0328 01:07:30.319254       1 controllermanager.go:735] "Started controller" controller="clusterrole-aggregation-controller"
	I0328 01:33:32.482708    6044 command_runner.go:130] ! I0328 01:07:30.319305       1 clusterroleaggregation_controller.go:189] "Starting ClusterRoleAggregator controller"
	I0328 01:33:32.482708    6044 command_runner.go:130] ! I0328 01:07:30.319687       1 shared_informer.go:311] Waiting for caches to sync for ClusterRoleAggregator
	I0328 01:33:32.482708    6044 command_runner.go:130] ! I0328 01:07:30.471941       1 controllermanager.go:735] "Started controller" controller="ttl-after-finished-controller"
	I0328 01:33:32.482708    6044 command_runner.go:130] ! I0328 01:07:30.472075       1 controllermanager.go:687] "Controller is disabled by a feature gate" controller="validatingadmissionpolicy-status-controller" requiredFeatureGates=["ValidatingAdmissionPolicy"]
	I0328 01:33:32.483721    6044 command_runner.go:130] ! I0328 01:07:30.472153       1 ttlafterfinished_controller.go:109] "Starting TTL after finished controller"
	I0328 01:33:32.483721    6044 command_runner.go:130] ! I0328 01:07:30.472461       1 shared_informer.go:311] Waiting for caches to sync for TTL after finished
	I0328 01:33:32.483721    6044 command_runner.go:130] ! I0328 01:07:30.621249       1 controllermanager.go:735] "Started controller" controller="pod-garbage-collector-controller"
	I0328 01:33:32.483721    6044 command_runner.go:130] ! I0328 01:07:30.621373       1 gc_controller.go:101] "Starting GC controller"
	I0328 01:33:32.483721    6044 command_runner.go:130] ! I0328 01:07:30.621385       1 shared_informer.go:311] Waiting for caches to sync for GC
	I0328 01:33:32.483721    6044 command_runner.go:130] ! I0328 01:07:30.935875       1 controllermanager.go:735] "Started controller" controller="horizontal-pod-autoscaler-controller"
	I0328 01:33:32.483721    6044 command_runner.go:130] ! I0328 01:07:30.935911       1 controllermanager.go:687] "Controller is disabled by a feature gate" controller="service-cidr-controller" requiredFeatureGates=["MultiCIDRServiceAllocator"]
	I0328 01:33:32.483721    6044 command_runner.go:130] ! I0328 01:07:30.935949       1 horizontal.go:200] "Starting HPA controller"
	I0328 01:33:32.483721    6044 command_runner.go:130] ! I0328 01:07:30.935957       1 shared_informer.go:311] Waiting for caches to sync for HPA
	I0328 01:33:32.483721    6044 command_runner.go:130] ! I0328 01:07:31.068710       1 controllermanager.go:735] "Started controller" controller="bootstrap-signer-controller"
	I0328 01:33:32.483721    6044 command_runner.go:130] ! I0328 01:07:31.068846       1 shared_informer.go:311] Waiting for caches to sync for bootstrap_signer
	I0328 01:33:32.483721    6044 command_runner.go:130] ! I0328 01:07:31.220656       1 controllermanager.go:735] "Started controller" controller="root-ca-certificate-publisher-controller"
	I0328 01:33:32.483721    6044 command_runner.go:130] ! I0328 01:07:31.220877       1 publisher.go:102] "Starting root CA cert publisher controller"
	I0328 01:33:32.483721    6044 command_runner.go:130] ! I0328 01:07:31.220890       1 shared_informer.go:311] Waiting for caches to sync for crt configmap
	I0328 01:33:32.483721    6044 command_runner.go:130] ! I0328 01:07:31.379912       1 controllermanager.go:735] "Started controller" controller="endpointslice-mirroring-controller"
	I0328 01:33:32.483721    6044 command_runner.go:130] ! I0328 01:07:31.380187       1 endpointslicemirroring_controller.go:223] "Starting EndpointSliceMirroring controller"
	I0328 01:33:32.483721    6044 command_runner.go:130] ! I0328 01:07:31.380276       1 shared_informer.go:311] Waiting for caches to sync for endpoint_slice_mirroring
	I0328 01:33:32.483721    6044 command_runner.go:130] ! I0328 01:07:31.525433       1 controllermanager.go:735] "Started controller" controller="replicationcontroller-controller"
	I0328 01:33:32.483721    6044 command_runner.go:130] ! I0328 01:07:31.525577       1 replica_set.go:214] "Starting controller" name="replicationcontroller"
	I0328 01:33:32.483721    6044 command_runner.go:130] ! I0328 01:07:31.525588       1 shared_informer.go:311] Waiting for caches to sync for ReplicationController
	I0328 01:33:32.483721    6044 command_runner.go:130] ! I0328 01:07:31.690023       1 controllermanager.go:735] "Started controller" controller="ttl-controller"
	I0328 01:33:32.483721    6044 command_runner.go:130] ! I0328 01:07:31.690130       1 ttl_controller.go:124] "Starting TTL controller"
	I0328 01:33:32.483721    6044 command_runner.go:130] ! I0328 01:07:31.690144       1 shared_informer.go:311] Waiting for caches to sync for TTL
	I0328 01:33:32.483721    6044 command_runner.go:130] ! I0328 01:07:31.828859       1 controllermanager.go:735] "Started controller" controller="token-cleaner-controller"
	I0328 01:33:32.483721    6044 command_runner.go:130] ! I0328 01:07:31.828953       1 tokencleaner.go:112] "Starting token cleaner controller"
	I0328 01:33:32.483721    6044 command_runner.go:130] ! I0328 01:07:31.828963       1 shared_informer.go:311] Waiting for caches to sync for token_cleaner
	I0328 01:33:32.483721    6044 command_runner.go:130] ! I0328 01:07:31.828970       1 shared_informer.go:318] Caches are synced for token_cleaner
	I0328 01:33:32.483721    6044 command_runner.go:130] ! I0328 01:07:31.991678       1 controllermanager.go:735] "Started controller" controller="persistentvolume-attach-detach-controller"
	I0328 01:33:32.483721    6044 command_runner.go:130] ! I0328 01:07:31.994944       1 controllermanager.go:687] "Controller is disabled by a feature gate" controller="storageversion-garbage-collector-controller" requiredFeatureGates=["APIServerIdentity","StorageVersionAPI"]
	I0328 01:33:32.483721    6044 command_runner.go:130] ! I0328 01:07:31.994881       1 attach_detach_controller.go:337] "Starting attach detach controller"
	I0328 01:33:32.483721    6044 command_runner.go:130] ! I0328 01:07:31.995033       1 shared_informer.go:311] Waiting for caches to sync for attach detach
	I0328 01:33:32.483721    6044 command_runner.go:130] ! I0328 01:07:32.040043       1 controllermanager.go:735] "Started controller" controller="taint-eviction-controller"
	I0328 01:33:32.483721    6044 command_runner.go:130] ! I0328 01:07:32.041773       1 taint_eviction.go:285] "Starting" controller="taint-eviction-controller"
	I0328 01:33:32.483721    6044 command_runner.go:130] ! I0328 01:07:32.041876       1 taint_eviction.go:291] "Sending events to api server"
	I0328 01:33:32.483721    6044 command_runner.go:130] ! I0328 01:07:32.041901       1 shared_informer.go:311] Waiting for caches to sync for taint-eviction-controller
	I0328 01:33:32.483721    6044 command_runner.go:130] ! I0328 01:07:32.281623       1 controllermanager.go:735] "Started controller" controller="namespace-controller"
	I0328 01:33:32.483721    6044 command_runner.go:130] ! I0328 01:07:32.281708       1 namespace_controller.go:197] "Starting namespace controller"
	I0328 01:33:32.483721    6044 command_runner.go:130] ! I0328 01:07:32.281718       1 shared_informer.go:311] Waiting for caches to sync for namespace
	I0328 01:33:32.483721    6044 command_runner.go:130] ! I0328 01:07:32.316698       1 certificate_controller.go:115] "Starting certificate controller" name="csrsigning-kubelet-serving"
	I0328 01:33:32.483721    6044 command_runner.go:130] ! I0328 01:07:32.316737       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-kubelet-serving
	I0328 01:33:32.483721    6044 command_runner.go:130] ! I0328 01:07:32.316772       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0328 01:33:32.483721    6044 command_runner.go:130] ! I0328 01:07:32.322120       1 certificate_controller.go:115] "Starting certificate controller" name="csrsigning-kubelet-client"
	I0328 01:33:32.483721    6044 command_runner.go:130] ! I0328 01:07:32.322156       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-kubelet-client
	I0328 01:33:32.483721    6044 command_runner.go:130] ! I0328 01:07:32.322181       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0328 01:33:32.483721    6044 command_runner.go:130] ! I0328 01:07:32.327656       1 certificate_controller.go:115] "Starting certificate controller" name="csrsigning-kube-apiserver-client"
	I0328 01:33:32.483721    6044 command_runner.go:130] ! I0328 01:07:32.327690       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client
	I0328 01:33:32.483721    6044 command_runner.go:130] ! I0328 01:07:32.327721       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0328 01:33:32.483721    6044 command_runner.go:130] ! I0328 01:07:32.331471       1 controllermanager.go:735] "Started controller" controller="certificatesigningrequest-signing-controller"
	I0328 01:33:32.483721    6044 command_runner.go:130] ! I0328 01:07:32.331563       1 certificate_controller.go:115] "Starting certificate controller" name="csrsigning-legacy-unknown"
	I0328 01:33:32.483721    6044 command_runner.go:130] ! I0328 01:07:32.331574       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
	I0328 01:33:32.483721    6044 command_runner.go:130] ! I0328 01:07:32.331616       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0328 01:33:32.483721    6044 command_runner.go:130] ! E0328 01:07:32.365862       1 core.go:270] "Failed to start cloud node lifecycle controller" err="no cloud provider provided"
	I0328 01:33:32.483721    6044 command_runner.go:130] ! I0328 01:07:32.365985       1 controllermanager.go:713] "Warning: skipping controller" controller="cloud-node-lifecycle-controller"
	I0328 01:33:32.483721    6044 command_runner.go:130] ! I0328 01:07:32.366024       1 controllermanager.go:687] "Controller is disabled by a feature gate" controller="resourceclaim-controller" requiredFeatureGates=["DynamicResourceAllocation"]
	I0328 01:33:32.483721    6044 command_runner.go:130] ! I0328 01:07:32.520320       1 controllermanager.go:735] "Started controller" controller="endpointslice-controller"
	I0328 01:33:32.483721    6044 command_runner.go:130] ! I0328 01:07:32.520407       1 endpointslice_controller.go:264] "Starting endpoint slice controller"
	I0328 01:33:32.483721    6044 command_runner.go:130] ! I0328 01:07:32.520419       1 shared_informer.go:311] Waiting for caches to sync for endpoint_slice
	I0328 01:33:32.483721    6044 command_runner.go:130] ! I0328 01:07:32.567130       1 controllermanager.go:735] "Started controller" controller="certificatesigningrequest-cleaner-controller"
	I0328 01:33:32.483721    6044 command_runner.go:130] ! I0328 01:07:32.567208       1 cleaner.go:83] "Starting CSR cleaner controller"
	I0328 01:33:32.483721    6044 command_runner.go:130] ! I0328 01:07:32.719261       1 controllermanager.go:735] "Started controller" controller="statefulset-controller"
	I0328 01:33:32.483721    6044 command_runner.go:130] ! I0328 01:07:32.719392       1 stateful_set.go:161] "Starting stateful set controller"
	I0328 01:33:32.483721    6044 command_runner.go:130] ! I0328 01:07:32.719403       1 shared_informer.go:311] Waiting for caches to sync for stateful set
	I0328 01:33:32.483721    6044 command_runner.go:130] ! I0328 01:07:32.872730       1 controllermanager.go:735] "Started controller" controller="persistentvolumeclaim-protection-controller"
	I0328 01:33:32.483721    6044 command_runner.go:130] ! I0328 01:07:32.872869       1 pvc_protection_controller.go:102] "Starting PVC protection controller"
	I0328 01:33:32.483721    6044 command_runner.go:130] ! I0328 01:07:32.873455       1 shared_informer.go:311] Waiting for caches to sync for PVC protection
	I0328 01:33:32.483721    6044 command_runner.go:130] ! I0328 01:07:33.116208       1 garbagecollector.go:155] "Starting controller" controller="garbagecollector"
	I0328 01:33:32.484700    6044 command_runner.go:130] ! I0328 01:07:33.116233       1 shared_informer.go:311] Waiting for caches to sync for garbage collector
	I0328 01:33:32.484700    6044 command_runner.go:130] ! I0328 01:07:33.116257       1 graph_builder.go:302] "Running" component="GraphBuilder"
	I0328 01:33:32.484700    6044 command_runner.go:130] ! I0328 01:07:33.116280       1 controllermanager.go:735] "Started controller" controller="garbage-collector-controller"
	I0328 01:33:32.484700    6044 command_runner.go:130] ! I0328 01:07:33.370650       1 controllermanager.go:735] "Started controller" controller="deployment-controller"
	I0328 01:33:32.484700    6044 command_runner.go:130] ! I0328 01:07:33.370836       1 deployment_controller.go:168] "Starting controller" controller="deployment"
	I0328 01:33:32.484700    6044 command_runner.go:130] ! I0328 01:07:33.370851       1 shared_informer.go:311] Waiting for caches to sync for deployment
	I0328 01:33:32.484700    6044 command_runner.go:130] ! E0328 01:07:33.529036       1 core.go:105] "Failed to start service controller" err="WARNING: no cloud provider provided, services of type LoadBalancer will fail"
	I0328 01:33:32.484700    6044 command_runner.go:130] ! I0328 01:07:33.529209       1 controllermanager.go:713] "Warning: skipping controller" controller="service-lb-controller"
	I0328 01:33:32.484700    6044 command_runner.go:130] ! I0328 01:07:33.674381       1 controllermanager.go:735] "Started controller" controller="replicaset-controller"
	I0328 01:33:32.484700    6044 command_runner.go:130] ! I0328 01:07:33.674638       1 replica_set.go:214] "Starting controller" name="replicaset"
	I0328 01:33:32.484700    6044 command_runner.go:130] ! I0328 01:07:33.674700       1 shared_informer.go:311] Waiting for caches to sync for ReplicaSet
	I0328 01:33:32.484700    6044 command_runner.go:130] ! I0328 01:07:43.727895       1 range_allocator.go:111] "No Secondary Service CIDR provided. Skipping filtering out secondary service addresses"
	I0328 01:33:32.484700    6044 command_runner.go:130] ! I0328 01:07:43.728282       1 controllermanager.go:735] "Started controller" controller="node-ipam-controller"
	I0328 01:33:32.484700    6044 command_runner.go:130] ! I0328 01:07:43.728736       1 node_ipam_controller.go:160] "Starting ipam controller"
	I0328 01:33:32.484700    6044 command_runner.go:130] ! I0328 01:07:43.728751       1 shared_informer.go:311] Waiting for caches to sync for node
	I0328 01:33:32.484700    6044 command_runner.go:130] ! I0328 01:07:43.743975       1 controllermanager.go:735] "Started controller" controller="persistentvolume-expander-controller"
	I0328 01:33:32.484700    6044 command_runner.go:130] ! I0328 01:07:43.744248       1 expand_controller.go:328] "Starting expand controller"
	I0328 01:33:32.484700    6044 command_runner.go:130] ! I0328 01:07:43.744261       1 shared_informer.go:311] Waiting for caches to sync for expand
	I0328 01:33:32.484700    6044 command_runner.go:130] ! I0328 01:07:43.764054       1 controllermanager.go:735] "Started controller" controller="ephemeral-volume-controller"
	I0328 01:33:32.484700    6044 command_runner.go:130] ! I0328 01:07:43.765369       1 controller.go:169] "Starting ephemeral volume controller"
	I0328 01:33:32.484700    6044 command_runner.go:130] ! I0328 01:07:43.765400       1 shared_informer.go:311] Waiting for caches to sync for ephemeral
	I0328 01:33:32.484700    6044 command_runner.go:130] ! I0328 01:07:43.801140       1 controllermanager.go:735] "Started controller" controller="endpoints-controller"
	I0328 01:33:32.484700    6044 command_runner.go:130] ! I0328 01:07:43.801602       1 endpoints_controller.go:174] "Starting endpoint controller"
	I0328 01:33:32.484700    6044 command_runner.go:130] ! I0328 01:07:43.801743       1 shared_informer.go:311] Waiting for caches to sync for endpoint
	I0328 01:33:32.484700    6044 command_runner.go:130] ! I0328 01:07:43.818031       1 controllermanager.go:735] "Started controller" controller="daemonset-controller"
	I0328 01:33:32.484700    6044 command_runner.go:130] ! I0328 01:07:43.818707       1 daemon_controller.go:291] "Starting daemon sets controller"
	I0328 01:33:32.484700    6044 command_runner.go:130] ! I0328 01:07:43.820733       1 shared_informer.go:311] Waiting for caches to sync for daemon sets
	I0328 01:33:32.484700    6044 command_runner.go:130] ! I0328 01:07:43.839571       1 shared_informer.go:311] Waiting for caches to sync for resource quota
	I0328 01:33:32.484700    6044 command_runner.go:130] ! I0328 01:07:43.887668       1 shared_informer.go:318] Caches are synced for TTL after finished
	I0328 01:33:32.484700    6044 command_runner.go:130] ! I0328 01:07:43.905965       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-240000\" does not exist"
	I0328 01:33:32.484700    6044 command_runner.go:130] ! I0328 01:07:43.917970       1 shared_informer.go:318] Caches are synced for cronjob
	I0328 01:33:32.484700    6044 command_runner.go:130] ! I0328 01:07:43.918581       1 shared_informer.go:318] Caches are synced for legacy-service-account-token-cleaner
	I0328 01:33:32.484700    6044 command_runner.go:130] ! I0328 01:07:43.921260       1 shared_informer.go:318] Caches are synced for endpoint_slice
	I0328 01:33:32.484700    6044 command_runner.go:130] ! I0328 01:07:43.921573       1 shared_informer.go:318] Caches are synced for GC
	I0328 01:33:32.484700    6044 command_runner.go:130] ! I0328 01:07:43.921763       1 shared_informer.go:318] Caches are synced for stateful set
	I0328 01:33:32.484700    6044 command_runner.go:130] ! I0328 01:07:43.923599       1 shared_informer.go:318] Caches are synced for ClusterRoleAggregator
	I0328 01:33:32.484700    6044 command_runner.go:130] ! I0328 01:07:43.924267       1 shared_informer.go:311] Waiting for caches to sync for garbage collector
	I0328 01:33:32.484700    6044 command_runner.go:130] ! I0328 01:07:43.922298       1 shared_informer.go:318] Caches are synced for daemon sets
	I0328 01:33:32.484700    6044 command_runner.go:130] ! I0328 01:07:43.928013       1 shared_informer.go:318] Caches are synced for ReplicationController
	I0328 01:33:32.484700    6044 command_runner.go:130] ! I0328 01:07:43.928774       1 shared_informer.go:318] Caches are synced for node
	I0328 01:33:32.484700    6044 command_runner.go:130] ! I0328 01:07:43.932324       1 range_allocator.go:174] "Sending events to api server"
	I0328 01:33:32.484700    6044 command_runner.go:130] ! I0328 01:07:43.932665       1 range_allocator.go:178] "Starting range CIDR allocator"
	I0328 01:33:32.484700    6044 command_runner.go:130] ! I0328 01:07:43.932965       1 shared_informer.go:311] Waiting for caches to sync for cidrallocator
	I0328 01:33:32.484700    6044 command_runner.go:130] ! I0328 01:07:43.933302       1 shared_informer.go:318] Caches are synced for cidrallocator
	I0328 01:33:32.484700    6044 command_runner.go:130] ! I0328 01:07:43.922308       1 shared_informer.go:318] Caches are synced for crt configmap
	I0328 01:33:32.484700    6044 command_runner.go:130] ! I0328 01:07:43.936175       1 shared_informer.go:318] Caches are synced for HPA
	I0328 01:33:32.484700    6044 command_runner.go:130] ! I0328 01:07:43.933370       1 shared_informer.go:318] Caches are synced for taint
	I0328 01:33:32.484700    6044 command_runner.go:130] ! I0328 01:07:43.936479       1 node_lifecycle_controller.go:1222] "Initializing eviction metric for zone" zone=""
	I0328 01:33:32.484700    6044 command_runner.go:130] ! I0328 01:07:43.936564       1 node_lifecycle_controller.go:874] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-240000"
	I0328 01:33:32.484700    6044 command_runner.go:130] ! I0328 01:07:43.936602       1 node_lifecycle_controller.go:1026] "Controller detected that all Nodes are not-Ready. Entering master disruption mode"
	I0328 01:33:32.484700    6044 command_runner.go:130] ! I0328 01:07:43.937774       1 event.go:376] "Event occurred" object="multinode-240000" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-240000 event: Registered Node multinode-240000 in Controller"
	I0328 01:33:32.484700    6044 command_runner.go:130] ! I0328 01:07:43.945317       1 shared_informer.go:318] Caches are synced for taint-eviction-controller
	I0328 01:33:32.484700    6044 command_runner.go:130] ! I0328 01:07:43.945634       1 shared_informer.go:318] Caches are synced for expand
	I0328 01:33:32.484700    6044 command_runner.go:130] ! I0328 01:07:43.953475       1 shared_informer.go:318] Caches are synced for PV protection
	I0328 01:33:32.484700    6044 command_runner.go:130] ! I0328 01:07:43.955430       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-240000" podCIDRs=["10.244.0.0/24"]
	I0328 01:33:32.485718    6044 command_runner.go:130] ! I0328 01:07:43.967780       1 shared_informer.go:318] Caches are synced for ephemeral
	I0328 01:33:32.485718    6044 command_runner.go:130] ! I0328 01:07:43.970146       1 shared_informer.go:318] Caches are synced for bootstrap_signer
	I0328 01:33:32.485718    6044 command_runner.go:130] ! I0328 01:07:43.973346       1 shared_informer.go:318] Caches are synced for persistent volume
	I0328 01:33:32.485718    6044 command_runner.go:130] ! I0328 01:07:43.973608       1 shared_informer.go:318] Caches are synced for PVC protection
	I0328 01:33:32.485718    6044 command_runner.go:130] ! I0328 01:07:43.981178       1 shared_informer.go:318] Caches are synced for endpoint_slice_mirroring
	I0328 01:33:32.485718    6044 command_runner.go:130] ! I0328 01:07:43.981918       1 event.go:376] "Event occurred" object="kube-system/kube-scheduler-multinode-240000" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0328 01:33:32.485718    6044 command_runner.go:130] ! I0328 01:07:43.981953       1 event.go:376] "Event occurred" object="kube-system/kube-apiserver-multinode-240000" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0328 01:33:32.485718    6044 command_runner.go:130] ! I0328 01:07:43.981962       1 event.go:376] "Event occurred" object="kube-system/etcd-multinode-240000" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0328 01:33:32.485718    6044 command_runner.go:130] ! I0328 01:07:43.982017       1 shared_informer.go:318] Caches are synced for namespace
	I0328 01:33:32.485718    6044 command_runner.go:130] ! I0328 01:07:43.982124       1 shared_informer.go:318] Caches are synced for service account
	I0328 01:33:32.485718    6044 command_runner.go:130] ! I0328 01:07:43.983577       1 event.go:376] "Event occurred" object="kube-system/kube-controller-manager-multinode-240000" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0328 01:33:32.485718    6044 command_runner.go:130] ! I0328 01:07:43.992236       1 shared_informer.go:318] Caches are synced for job
	I0328 01:33:32.485718    6044 command_runner.go:130] ! I0328 01:07:43.992438       1 shared_informer.go:318] Caches are synced for TTL
	I0328 01:33:32.485718    6044 command_runner.go:130] ! I0328 01:07:43.995152       1 shared_informer.go:318] Caches are synced for attach detach
	I0328 01:33:32.485718    6044 command_runner.go:130] ! I0328 01:07:44.003250       1 shared_informer.go:318] Caches are synced for endpoint
	I0328 01:33:32.485718    6044 command_runner.go:130] ! I0328 01:07:44.023343       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-client
	I0328 01:33:32.485718    6044 command_runner.go:130] ! I0328 01:07:44.023546       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-serving
	I0328 01:33:32.485718    6044 command_runner.go:130] ! I0328 01:07:44.030529       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0328 01:33:32.485718    6044 command_runner.go:130] ! I0328 01:07:44.032370       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-legacy-unknown
	I0328 01:33:32.485718    6044 command_runner.go:130] ! I0328 01:07:44.039826       1 shared_informer.go:318] Caches are synced for resource quota
	I0328 01:33:32.485718    6044 command_runner.go:130] ! I0328 01:07:44.039875       1 shared_informer.go:318] Caches are synced for disruption
	I0328 01:33:32.485718    6044 command_runner.go:130] ! I0328 01:07:44.059155       1 shared_informer.go:318] Caches are synced for certificate-csrapproving
	I0328 01:33:32.485718    6044 command_runner.go:130] ! I0328 01:07:44.071020       1 shared_informer.go:318] Caches are synced for deployment
	I0328 01:33:32.485718    6044 command_runner.go:130] ! I0328 01:07:44.074821       1 shared_informer.go:318] Caches are synced for ReplicaSet
	I0328 01:33:32.485718    6044 command_runner.go:130] ! I0328 01:07:44.095916       1 shared_informer.go:318] Caches are synced for resource quota
	I0328 01:33:32.485718    6044 command_runner.go:130] ! I0328 01:07:44.097596       1 event.go:376] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-rwghf"
	I0328 01:33:32.485718    6044 command_runner.go:130] ! I0328 01:07:44.101053       1 event.go:376] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-47rqg"
	I0328 01:33:32.485718    6044 command_runner.go:130] ! I0328 01:07:44.321636       1 event.go:376] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-76f75df574 to 2"
	I0328 01:33:32.485718    6044 command_runner.go:130] ! I0328 01:07:44.505533       1 event.go:376] "Event occurred" object="kube-system/coredns-76f75df574" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-76f75df574-fgw8j"
	I0328 01:33:32.485718    6044 command_runner.go:130] ! I0328 01:07:44.516581       1 shared_informer.go:318] Caches are synced for garbage collector
	I0328 01:33:32.485718    6044 command_runner.go:130] ! I0328 01:07:44.516605       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I0328 01:33:32.485718    6044 command_runner.go:130] ! I0328 01:07:44.526884       1 shared_informer.go:318] Caches are synced for garbage collector
	I0328 01:33:32.485718    6044 command_runner.go:130] ! I0328 01:07:44.626020       1 event.go:376] "Event occurred" object="kube-system/coredns-76f75df574" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-76f75df574-776ph"
	I0328 01:33:32.485718    6044 command_runner.go:130] ! I0328 01:07:44.696026       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="375.988233ms"
	I0328 01:33:32.485718    6044 command_runner.go:130] ! I0328 01:07:44.735389       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="39.221627ms"
	I0328 01:33:32.485718    6044 command_runner.go:130] ! I0328 01:07:44.735856       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="390.399µs"
	I0328 01:33:32.485718    6044 command_runner.go:130] ! I0328 01:07:45.456688       1 event.go:376] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-76f75df574 to 1 from 2"
	I0328 01:33:32.485718    6044 command_runner.go:130] ! I0328 01:07:45.536906       1 event.go:376] "Event occurred" object="kube-system/coredns-76f75df574" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-76f75df574-fgw8j"
	I0328 01:33:32.485718    6044 command_runner.go:130] ! I0328 01:07:45.583335       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="126.427189ms"
	I0328 01:33:32.485718    6044 command_runner.go:130] ! I0328 01:07:45.637187       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="53.741283ms"
	I0328 01:33:32.485718    6044 command_runner.go:130] ! I0328 01:07:45.710380       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="73.035205ms"
	I0328 01:33:32.485718    6044 command_runner.go:130] ! I0328 01:07:45.710568       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="73.7µs"
	I0328 01:33:32.485718    6044 command_runner.go:130] ! I0328 01:07:57.839298       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="81.8µs"
	I0328 01:33:32.485718    6044 command_runner.go:130] ! I0328 01:07:57.891332       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="135.3µs"
	I0328 01:33:32.485718    6044 command_runner.go:130] ! I0328 01:07:58.938669       1 node_lifecycle_controller.go:1045] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I0328 01:33:32.485718    6044 command_runner.go:130] ! I0328 01:07:59.949779       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="25.944009ms"
	I0328 01:33:32.485718    6044 command_runner.go:130] ! I0328 01:07:59.950218       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="327.807µs"
	I0328 01:33:32.485718    6044 command_runner.go:130] ! I0328 01:10:54.764176       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-240000-m02\" does not exist"
	I0328 01:33:32.485718    6044 command_runner.go:130] ! I0328 01:10:54.803820       1 event.go:376] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-hsnfl"
	I0328 01:33:32.485718    6044 command_runner.go:130] ! I0328 01:10:54.803944       1 event.go:376] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-t88gz"
	I0328 01:33:32.485718    6044 command_runner.go:130] ! I0328 01:10:54.804885       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-240000-m02" podCIDRs=["10.244.1.0/24"]
	I0328 01:33:32.485718    6044 command_runner.go:130] ! I0328 01:10:58.975442       1 node_lifecycle_controller.go:874] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-240000-m02"
	I0328 01:33:32.485718    6044 command_runner.go:130] ! I0328 01:10:58.975715       1 event.go:376] "Event occurred" object="multinode-240000-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-240000-m02 event: Registered Node multinode-240000-m02 in Controller"
	I0328 01:33:32.485718    6044 command_runner.go:130] ! I0328 01:11:17.665064       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-240000-m02"
	I0328 01:33:32.485718    6044 command_runner.go:130] ! I0328 01:11:46.242165       1 event.go:376] "Event occurred" object="default/busybox" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set busybox-7fdf7869d9 to 2"
	I0328 01:33:32.485718    6044 command_runner.go:130] ! I0328 01:11:46.265582       1 event.go:376] "Event occurred" object="default/busybox-7fdf7869d9" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-7fdf7869d9-zgwm4"
	I0328 01:33:32.485718    6044 command_runner.go:130] ! I0328 01:11:46.287052       1 event.go:376] "Event occurred" object="default/busybox-7fdf7869d9" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-7fdf7869d9-ct428"
	I0328 01:33:32.485718    6044 command_runner.go:130] ! I0328 01:11:46.306059       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="64.440988ms"
	I0328 01:33:32.486719    6044 command_runner.go:130] ! I0328 01:11:46.352353       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="46.180707ms"
	I0328 01:33:32.486719    6044 command_runner.go:130] ! I0328 01:11:46.354927       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="106.701µs"
	I0328 01:33:32.486719    6044 command_runner.go:130] ! I0328 01:11:46.380446       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="75.4µs"
	I0328 01:33:32.486719    6044 command_runner.go:130] ! I0328 01:11:49.177937       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="20.338671ms"
	I0328 01:33:32.486719    6044 command_runner.go:130] ! I0328 01:11:49.178143       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="95.8µs"
	I0328 01:33:32.486719    6044 command_runner.go:130] ! I0328 01:11:49.352601       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="13.382248ms"
	I0328 01:33:32.486719    6044 command_runner.go:130] ! I0328 01:11:49.353052       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="62.5µs"
	I0328 01:33:32.486719    6044 command_runner.go:130] ! I0328 01:15:58.358805       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-240000-m03\" does not exist"
	I0328 01:33:32.491836    6044 command_runner.go:130] ! I0328 01:15:58.359348       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-240000-m02"
	I0328 01:33:32.491915    6044 command_runner.go:130] ! I0328 01:15:58.402286       1 event.go:376] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-jvgx2"
	I0328 01:33:32.491915    6044 command_runner.go:130] ! I0328 01:15:58.402827       1 event.go:376] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-55rch"
	I0328 01:33:32.492041    6044 command_runner.go:130] ! I0328 01:15:58.405421       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-240000-m03" podCIDRs=["10.244.2.0/24"]
	I0328 01:33:32.492041    6044 command_runner.go:130] ! I0328 01:15:59.058703       1 node_lifecycle_controller.go:874] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-240000-m03"
	I0328 01:33:32.492131    6044 command_runner.go:130] ! I0328 01:15:59.059180       1 event.go:376] "Event occurred" object="multinode-240000-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-240000-m03 event: Registered Node multinode-240000-m03 in Controller"
	I0328 01:33:32.492131    6044 command_runner.go:130] ! I0328 01:16:20.751668       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-240000-m02"
	I0328 01:33:32.492131    6044 command_runner.go:130] ! I0328 01:24:29.197407       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-240000-m02"
	I0328 01:33:32.492131    6044 command_runner.go:130] ! I0328 01:24:29.203202       1 event.go:376] "Event occurred" object="multinode-240000-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node multinode-240000-m03 status is now: NodeNotReady"
	I0328 01:33:32.492199    6044 command_runner.go:130] ! I0328 01:24:29.229608       1 event.go:376] "Event occurred" object="kube-system/kube-proxy-55rch" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0328 01:33:32.492199    6044 command_runner.go:130] ! I0328 01:24:29.247522       1 event.go:376] "Event occurred" object="kube-system/kindnet-jvgx2" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0328 01:33:32.492199    6044 command_runner.go:130] ! I0328 01:27:23.686830       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-240000-m02"
	I0328 01:33:32.492326    6044 command_runner.go:130] ! I0328 01:27:24.286010       1 event.go:376] "Event occurred" object="multinode-240000-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RemovingNode" message="Node multinode-240000-m03 event: Removing Node multinode-240000-m03 from Controller"
	I0328 01:33:32.492326    6044 command_runner.go:130] ! I0328 01:27:30.358404       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-240000-m02"
	I0328 01:33:32.492326    6044 command_runner.go:130] ! I0328 01:27:30.361770       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-240000-m03\" does not exist"
	I0328 01:33:32.492413    6044 command_runner.go:130] ! I0328 01:27:30.394360       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-240000-m03" podCIDRs=["10.244.3.0/24"]
	I0328 01:33:32.492413    6044 command_runner.go:130] ! I0328 01:27:34.288477       1 event.go:376] "Event occurred" object="multinode-240000-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-240000-m03 event: Registered Node multinode-240000-m03 in Controller"
	I0328 01:33:32.492470    6044 command_runner.go:130] ! I0328 01:27:36.134336       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-240000-m03"
	I0328 01:33:32.492492    6044 command_runner.go:130] ! I0328 01:29:14.344304       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-240000-m02"
	I0328 01:33:32.492520    6044 command_runner.go:130] ! I0328 01:29:14.346290       1 event.go:376] "Event occurred" object="multinode-240000-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node multinode-240000-m03 status is now: NodeNotReady"
	I0328 01:33:32.492520    6044 command_runner.go:130] ! I0328 01:29:14.370766       1 event.go:376] "Event occurred" object="kube-system/kube-proxy-55rch" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0328 01:33:32.492520    6044 command_runner.go:130] ! I0328 01:29:14.392308       1 event.go:376] "Event occurred" object="kube-system/kindnet-jvgx2" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0328 01:33:32.515107    6044 logs.go:123] Gathering logs for dmesg ...
	I0328 01:33:32.515107    6044 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:33:32.542162    6044 command_runner.go:130] > [Mar28 01:30] You have booted with nomodeset. This means your GPU drivers are DISABLED
	I0328 01:33:32.542162    6044 command_runner.go:130] > [  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	I0328 01:33:32.542249    6044 command_runner.go:130] > [  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	I0328 01:33:32.542249    6044 command_runner.go:130] > [  +0.141916] RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
	I0328 01:33:32.542249    6044 command_runner.go:130] > [  +0.024106] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	I0328 01:33:32.542317    6044 command_runner.go:130] > [  +0.000005] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	I0328 01:33:32.542317    6044 command_runner.go:130] > [  +0.000001] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	I0328 01:33:32.542317    6044 command_runner.go:130] > [  +0.068008] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	I0328 01:33:32.542379    6044 command_runner.go:130] > [  +0.027431] * Found PM-Timer Bug on the chipset. Due to workarounds for a bug,
	I0328 01:33:32.542379    6044 command_runner.go:130] >               * this clock source is slow. Consider trying other clock sources
	I0328 01:33:32.542411    6044 command_runner.go:130] > [  +5.946328] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	I0328 01:33:32.542411    6044 command_runner.go:130] > [  +0.758535] psmouse serio1: trackpoint: failed to get extended button data, assuming 3 buttons
	I0328 01:33:32.542411    6044 command_runner.go:130] > [  +1.937420] systemd-fstab-generator[115]: Ignoring "noauto" option for root device
	I0328 01:33:32.542458    6044 command_runner.go:130] > [  +7.347197] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	I0328 01:33:32.542458    6044 command_runner.go:130] > [  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	I0328 01:33:32.542498    6044 command_runner.go:130] > [  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	I0328 01:33:32.542498    6044 command_runner.go:130] > [Mar28 01:31] systemd-fstab-generator[640]: Ignoring "noauto" option for root device
	I0328 01:33:32.542498    6044 command_runner.go:130] > [  +0.201840] systemd-fstab-generator[653]: Ignoring "noauto" option for root device
	I0328 01:33:32.542538    6044 command_runner.go:130] > [Mar28 01:32] systemd-fstab-generator[979]: Ignoring "noauto" option for root device
	I0328 01:33:32.542538    6044 command_runner.go:130] > [  +0.108343] kauditd_printk_skb: 73 callbacks suppressed
	I0328 01:33:32.542605    6044 command_runner.go:130] > [  +0.586323] systemd-fstab-generator[1017]: Ignoring "noauto" option for root device
	I0328 01:33:32.542605    6044 command_runner.go:130] > [  +0.218407] systemd-fstab-generator[1029]: Ignoring "noauto" option for root device
	I0328 01:33:32.542605    6044 command_runner.go:130] > [  +0.238441] systemd-fstab-generator[1043]: Ignoring "noauto" option for root device
	I0328 01:33:32.542605    6044 command_runner.go:130] > [  +3.002162] systemd-fstab-generator[1230]: Ignoring "noauto" option for root device
	I0328 01:33:32.542668    6044 command_runner.go:130] > [  +0.206082] systemd-fstab-generator[1242]: Ignoring "noauto" option for root device
	I0328 01:33:32.542668    6044 command_runner.go:130] > [  +0.206423] systemd-fstab-generator[1254]: Ignoring "noauto" option for root device
	I0328 01:33:32.542668    6044 command_runner.go:130] > [  +0.316656] systemd-fstab-generator[1269]: Ignoring "noauto" option for root device
	I0328 01:33:32.542668    6044 command_runner.go:130] > [  +0.941398] systemd-fstab-generator[1391]: Ignoring "noauto" option for root device
	I0328 01:33:32.542668    6044 command_runner.go:130] > [  +0.123620] kauditd_printk_skb: 205 callbacks suppressed
	I0328 01:33:32.542775    6044 command_runner.go:130] > [  +3.687763] systemd-fstab-generator[1526]: Ignoring "noauto" option for root device
	I0328 01:33:32.542775    6044 command_runner.go:130] > [  +1.367953] kauditd_printk_skb: 44 callbacks suppressed
	I0328 01:33:32.542775    6044 command_runner.go:130] > [  +6.014600] kauditd_printk_skb: 30 callbacks suppressed
	I0328 01:33:32.542832    6044 command_runner.go:130] > [  +4.465273] systemd-fstab-generator[3066]: Ignoring "noauto" option for root device
	I0328 01:33:32.542832    6044 command_runner.go:130] > [  +7.649293] kauditd_printk_skb: 70 callbacks suppressed
	I0328 01:33:32.544407    6044 logs.go:123] Gathering logs for kube-proxy [7c9638784c60] ...
	I0328 01:33:32.544407    6044 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c9638784c60"
	I0328 01:33:32.576041    6044 command_runner.go:130] ! I0328 01:32:22.346613       1 server_others.go:72] "Using iptables proxy"
	I0328 01:33:32.576041    6044 command_runner.go:130] ! I0328 01:32:22.432600       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["172.28.229.19"]
	I0328 01:33:32.576041    6044 command_runner.go:130] ! I0328 01:32:22.670309       1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
	I0328 01:33:32.576041    6044 command_runner.go:130] ! I0328 01:32:22.670342       1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0328 01:33:32.576041    6044 command_runner.go:130] ! I0328 01:32:22.670422       1 server_others.go:168] "Using iptables Proxier"
	I0328 01:33:32.576041    6044 command_runner.go:130] ! I0328 01:32:22.691003       1 proxier.go:245] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0328 01:33:32.576041    6044 command_runner.go:130] ! I0328 01:32:22.691955       1 server.go:865] "Version info" version="v1.29.3"
	I0328 01:33:32.576041    6044 command_runner.go:130] ! I0328 01:32:22.691998       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0328 01:33:32.576041    6044 command_runner.go:130] ! I0328 01:32:22.703546       1 config.go:188] "Starting service config controller"
	I0328 01:33:32.576041    6044 command_runner.go:130] ! I0328 01:32:22.706995       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0328 01:33:32.576041    6044 command_runner.go:130] ! I0328 01:32:22.707357       1 config.go:97] "Starting endpoint slice config controller"
	I0328 01:33:32.577024    6044 command_runner.go:130] ! I0328 01:32:22.707370       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0328 01:33:32.577024    6044 command_runner.go:130] ! I0328 01:32:22.708174       1 config.go:315] "Starting node config controller"
	I0328 01:33:32.577024    6044 command_runner.go:130] ! I0328 01:32:22.708184       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0328 01:33:32.577024    6044 command_runner.go:130] ! I0328 01:32:22.807593       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0328 01:33:32.577024    6044 command_runner.go:130] ! I0328 01:32:22.807699       1 shared_informer.go:318] Caches are synced for service config
	I0328 01:33:32.577024    6044 command_runner.go:130] ! I0328 01:32:22.808439       1 shared_informer.go:318] Caches are synced for node config
	I0328 01:33:35.091248    6044 api_server.go:253] Checking apiserver healthz at https://172.28.229.19:8443/healthz ...
	I0328 01:33:35.099170    6044 api_server.go:279] https://172.28.229.19:8443/healthz returned 200:
	ok
	I0328 01:33:35.099859    6044 round_trippers.go:463] GET https://172.28.229.19:8443/version
	I0328 01:33:35.099859    6044 round_trippers.go:469] Request Headers:
	I0328 01:33:35.099859    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:33:35.099859    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:33:35.101522    6044 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0328 01:33:35.101522    6044 round_trippers.go:577] Response Headers:
	I0328 01:33:35.101522    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:33:35.101993    6044 round_trippers.go:580]     Content-Length: 263
	I0328 01:33:35.101993    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:33:35 GMT
	I0328 01:33:35.101993    6044 round_trippers.go:580]     Audit-Id: 1e18aebc-88d9-4bca-a454-127886c4f63d
	I0328 01:33:35.101993    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:33:35.102055    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:33:35.102055    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:33:35.102055    6044 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "29",
	  "gitVersion": "v1.29.3",
	  "gitCommit": "6813625b7cd706db5bc7388921be03071e1a492d",
	  "gitTreeState": "clean",
	  "buildDate": "2024-03-14T23:58:36Z",
	  "goVersion": "go1.21.8",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0328 01:33:35.102055    6044 api_server.go:141] control plane version: v1.29.3
	I0328 01:33:35.102055    6044 api_server.go:131] duration metric: took 3.9971042s to wait for apiserver health ...
	I0328 01:33:35.102055    6044 system_pods.go:43] waiting for kube-system pods to appear ...
	I0328 01:33:35.113585    6044 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0328 01:33:35.140665    6044 command_runner.go:130] > 6539c85e1b61
	I0328 01:33:35.141602    6044 logs.go:276] 1 containers: [6539c85e1b61]
	I0328 01:33:35.153084    6044 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0328 01:33:35.179354    6044 command_runner.go:130] > ab4a76ecb029
	I0328 01:33:35.180316    6044 logs.go:276] 1 containers: [ab4a76ecb029]
	I0328 01:33:35.194762    6044 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0328 01:33:35.231631    6044 command_runner.go:130] > e6a5a75ec447
	I0328 01:33:35.231975    6044 command_runner.go:130] > 29e516c918ef
	I0328 01:33:35.232319    6044 logs.go:276] 2 containers: [e6a5a75ec447 29e516c918ef]
	I0328 01:33:35.243219    6044 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0328 01:33:35.279599    6044 command_runner.go:130] > bc83a37dbd03
	I0328 01:33:35.279677    6044 command_runner.go:130] > 7061eab02790
	I0328 01:33:35.279743    6044 logs.go:276] 2 containers: [bc83a37dbd03 7061eab02790]
	I0328 01:33:35.289046    6044 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0328 01:33:35.317591    6044 command_runner.go:130] > 7c9638784c60
	I0328 01:33:35.321722    6044 command_runner.go:130] > bb0b3c542264
	I0328 01:33:35.321722    6044 logs.go:276] 2 containers: [7c9638784c60 bb0b3c542264]
	I0328 01:33:35.332818    6044 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0328 01:33:35.355279    6044 command_runner.go:130] > ceaccf323dee
	I0328 01:33:35.355279    6044 command_runner.go:130] > 1aa05268773e
	I0328 01:33:35.355279    6044 logs.go:276] 2 containers: [ceaccf323dee 1aa05268773e]
	I0328 01:33:35.365460    6044 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0328 01:33:35.389132    6044 command_runner.go:130] > ee99098e42fc
	I0328 01:33:35.389132    6044 command_runner.go:130] > dc9808261b21
	I0328 01:33:35.389132    6044 logs.go:276] 2 containers: [ee99098e42fc dc9808261b21]
	I0328 01:33:35.389611    6044 logs.go:123] Gathering logs for coredns [e6a5a75ec447] ...
	I0328 01:33:35.389611    6044 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6a5a75ec447"
	I0328 01:33:35.419889    6044 command_runner.go:130] > .:53
	I0328 01:33:35.420772    6044 command_runner.go:130] > [INFO] plugin/reload: Running configuration SHA512 = 61f4d0960164fdf8d8157aaa96d041acf5b29f3c98ba802d705114162ff9f2cc889bbb973f9b8023f3112734912ee6f4eadc4faa21115183d5697de30dae3805
	I0328 01:33:35.420772    6044 command_runner.go:130] > CoreDNS-1.11.1
	I0328 01:33:35.420772    6044 command_runner.go:130] > linux/amd64, go1.20.7, ae2bbc2
	I0328 01:33:35.420772    6044 command_runner.go:130] > [INFO] 127.0.0.1:56542 - 57483 "HINFO IN 863318367541877849.2825438388179145044. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.037994825s
	I0328 01:33:35.421089    6044 logs.go:123] Gathering logs for kindnet [dc9808261b21] ...
	I0328 01:33:35.421140    6044 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc9808261b21"
	I0328 01:33:35.448022    6044 command_runner.go:130] ! I0328 01:18:33.819057       1 main.go:227] handling current node
	I0328 01:33:35.448022    6044 command_runner.go:130] ! I0328 01:18:33.819073       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:35.448513    6044 command_runner.go:130] ! I0328 01:18:33.819080       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:35.448571    6044 command_runner.go:130] ! I0328 01:18:33.819256       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:35.448571    6044 command_runner.go:130] ! I0328 01:18:33.819279       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:35.448634    6044 command_runner.go:130] ! I0328 01:18:43.840507       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:35.448696    6044 command_runner.go:130] ! I0328 01:18:43.840617       1 main.go:227] handling current node
	I0328 01:33:35.448804    6044 command_runner.go:130] ! I0328 01:18:43.840633       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:35.448963    6044 command_runner.go:130] ! I0328 01:18:43.840643       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:35.448963    6044 command_runner.go:130] ! I0328 01:18:43.841217       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:35.449104    6044 command_runner.go:130] ! I0328 01:18:43.841333       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:35.449253    6044 command_runner.go:130] ! I0328 01:18:53.861521       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:35.449253    6044 command_runner.go:130] ! I0328 01:18:53.861738       1 main.go:227] handling current node
	I0328 01:33:35.449384    6044 command_runner.go:130] ! I0328 01:18:53.861763       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:35.449384    6044 command_runner.go:130] ! I0328 01:18:53.861779       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:35.449384    6044 command_runner.go:130] ! I0328 01:18:53.864849       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:35.449384    6044 command_runner.go:130] ! I0328 01:18:53.864869       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:35.449384    6044 command_runner.go:130] ! I0328 01:19:03.880199       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:35.449384    6044 command_runner.go:130] ! I0328 01:19:03.880733       1 main.go:227] handling current node
	I0328 01:33:35.449384    6044 command_runner.go:130] ! I0328 01:19:03.880872       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:35.449384    6044 command_runner.go:130] ! I0328 01:19:03.880900       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:35.449384    6044 command_runner.go:130] ! I0328 01:19:03.881505       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:35.449970    6044 command_runner.go:130] ! I0328 01:19:03.881543       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:35.450039    6044 command_runner.go:130] ! I0328 01:19:13.889436       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:35.450099    6044 command_runner.go:130] ! I0328 01:19:13.889552       1 main.go:227] handling current node
	I0328 01:33:35.450141    6044 command_runner.go:130] ! I0328 01:19:13.889571       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:35.453421    6044 command_runner.go:130] ! I0328 01:19:13.889581       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:35.453500    6044 command_runner.go:130] ! I0328 01:19:13.889757       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:35.453500    6044 command_runner.go:130] ! I0328 01:19:13.889789       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:35.453500    6044 command_runner.go:130] ! I0328 01:19:23.898023       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:35.453561    6044 command_runner.go:130] ! I0328 01:19:23.898229       1 main.go:227] handling current node
	I0328 01:33:35.453561    6044 command_runner.go:130] ! I0328 01:19:23.898245       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:35.453625    6044 command_runner.go:130] ! I0328 01:19:23.898277       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:35.453625    6044 command_runner.go:130] ! I0328 01:19:23.898405       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:35.453696    6044 command_runner.go:130] ! I0328 01:19:23.898492       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:35.453696    6044 command_runner.go:130] ! I0328 01:19:33.905977       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:35.453772    6044 command_runner.go:130] ! I0328 01:19:33.906123       1 main.go:227] handling current node
	I0328 01:33:35.453831    6044 command_runner.go:130] ! I0328 01:19:33.906157       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:35.453831    6044 command_runner.go:130] ! I0328 01:19:33.906167       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:35.453893    6044 command_runner.go:130] ! I0328 01:19:33.906618       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:35.453893    6044 command_runner.go:130] ! I0328 01:19:33.906762       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:35.453970    6044 command_runner.go:130] ! I0328 01:19:43.914797       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:35.453970    6044 command_runner.go:130] ! I0328 01:19:43.914849       1 main.go:227] handling current node
	I0328 01:33:35.454059    6044 command_runner.go:130] ! I0328 01:19:43.914863       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:35.454059    6044 command_runner.go:130] ! I0328 01:19:43.914872       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:35.454135    6044 command_runner.go:130] ! I0328 01:19:43.915508       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:35.454210    6044 command_runner.go:130] ! I0328 01:19:43.915608       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:35.454275    6044 command_runner.go:130] ! I0328 01:19:53.928273       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:35.454275    6044 command_runner.go:130] ! I0328 01:19:53.928372       1 main.go:227] handling current node
	I0328 01:33:35.454353    6044 command_runner.go:130] ! I0328 01:19:53.928389       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:35.454353    6044 command_runner.go:130] ! I0328 01:19:53.928398       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:35.454442    6044 command_runner.go:130] ! I0328 01:19:53.928659       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:35.454481    6044 command_runner.go:130] ! I0328 01:19:53.928813       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:35.454525    6044 command_runner.go:130] ! I0328 01:20:03.943868       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:35.454525    6044 command_runner.go:130] ! I0328 01:20:03.943974       1 main.go:227] handling current node
	I0328 01:33:35.454606    6044 command_runner.go:130] ! I0328 01:20:03.943995       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:35.454606    6044 command_runner.go:130] ! I0328 01:20:03.944004       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:35.454839    6044 command_runner.go:130] ! I0328 01:20:03.944882       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:35.454904    6044 command_runner.go:130] ! I0328 01:20:03.944986       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:35.454904    6044 command_runner.go:130] ! I0328 01:20:13.959538       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:35.454964    6044 command_runner.go:130] ! I0328 01:20:13.959588       1 main.go:227] handling current node
	I0328 01:33:35.455056    6044 command_runner.go:130] ! I0328 01:20:13.959601       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:35.455056    6044 command_runner.go:130] ! I0328 01:20:13.959609       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:35.455114    6044 command_runner.go:130] ! I0328 01:20:13.960072       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:35.455114    6044 command_runner.go:130] ! I0328 01:20:13.960245       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:35.455175    6044 command_runner.go:130] ! I0328 01:20:23.967471       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:35.455231    6044 command_runner.go:130] ! I0328 01:20:23.967523       1 main.go:227] handling current node
	I0328 01:33:35.455231    6044 command_runner.go:130] ! I0328 01:20:23.967537       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:35.455291    6044 command_runner.go:130] ! I0328 01:20:23.967547       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:35.455291    6044 command_runner.go:130] ! I0328 01:20:23.968155       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:35.455347    6044 command_runner.go:130] ! I0328 01:20:23.968173       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:35.455347    6044 command_runner.go:130] ! I0328 01:20:33.977018       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:35.455409    6044 command_runner.go:130] ! I0328 01:20:33.977224       1 main.go:227] handling current node
	I0328 01:33:35.455409    6044 command_runner.go:130] ! I0328 01:20:33.977259       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:35.455487    6044 command_runner.go:130] ! I0328 01:20:33.977287       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:35.455487    6044 command_runner.go:130] ! I0328 01:20:33.978024       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:35.455487    6044 command_runner.go:130] ! I0328 01:20:33.978173       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:35.455550    6044 command_runner.go:130] ! I0328 01:20:43.987057       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:35.455681    6044 command_runner.go:130] ! I0328 01:20:43.987266       1 main.go:227] handling current node
	I0328 01:33:35.455681    6044 command_runner.go:130] ! I0328 01:20:43.987283       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:35.455764    6044 command_runner.go:130] ! I0328 01:20:43.987293       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:35.455824    6044 command_runner.go:130] ! I0328 01:20:43.987429       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:35.455824    6044 command_runner.go:130] ! I0328 01:20:43.987462       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:35.455824    6044 command_runner.go:130] ! I0328 01:20:53.994024       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:35.455879    6044 command_runner.go:130] ! I0328 01:20:53.994070       1 main.go:227] handling current node
	I0328 01:33:35.455879    6044 command_runner.go:130] ! I0328 01:20:53.994120       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:35.455879    6044 command_runner.go:130] ! I0328 01:20:53.994132       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:35.455879    6044 command_runner.go:130] ! I0328 01:20:53.994628       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:35.455879    6044 command_runner.go:130] ! I0328 01:20:53.994669       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:35.455879    6044 command_runner.go:130] ! I0328 01:21:04.009908       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:35.455879    6044 command_runner.go:130] ! I0328 01:21:04.010006       1 main.go:227] handling current node
	I0328 01:33:35.455879    6044 command_runner.go:130] ! I0328 01:21:04.010023       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:35.455879    6044 command_runner.go:130] ! I0328 01:21:04.010033       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:35.455879    6044 command_runner.go:130] ! I0328 01:21:04.010413       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:35.455879    6044 command_runner.go:130] ! I0328 01:21:04.010445       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:35.455879    6044 command_runner.go:130] ! I0328 01:21:14.024266       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:35.455879    6044 command_runner.go:130] ! I0328 01:21:14.024350       1 main.go:227] handling current node
	I0328 01:33:35.455879    6044 command_runner.go:130] ! I0328 01:21:14.024365       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:35.455879    6044 command_runner.go:130] ! I0328 01:21:14.024372       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:35.455879    6044 command_runner.go:130] ! I0328 01:21:14.024495       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:35.455879    6044 command_runner.go:130] ! I0328 01:21:14.024525       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:35.455879    6044 command_runner.go:130] ! I0328 01:21:24.033056       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:35.455879    6044 command_runner.go:130] ! I0328 01:21:24.033221       1 main.go:227] handling current node
	I0328 01:33:35.455879    6044 command_runner.go:130] ! I0328 01:21:24.033244       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:35.455879    6044 command_runner.go:130] ! I0328 01:21:24.033254       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:35.455879    6044 command_runner.go:130] ! I0328 01:21:24.033447       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:35.455879    6044 command_runner.go:130] ! I0328 01:21:24.033718       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:35.455879    6044 command_runner.go:130] ! I0328 01:21:34.054141       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:35.455879    6044 command_runner.go:130] ! I0328 01:21:34.054348       1 main.go:227] handling current node
	I0328 01:33:35.455879    6044 command_runner.go:130] ! I0328 01:21:34.054367       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:35.455879    6044 command_runner.go:130] ! I0328 01:21:34.054377       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:35.455879    6044 command_runner.go:130] ! I0328 01:21:34.056796       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:35.455879    6044 command_runner.go:130] ! I0328 01:21:34.056838       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:35.455879    6044 command_runner.go:130] ! I0328 01:21:44.063011       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:35.455879    6044 command_runner.go:130] ! I0328 01:21:44.063388       1 main.go:227] handling current node
	I0328 01:33:35.455879    6044 command_runner.go:130] ! I0328 01:21:44.063639       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:35.455879    6044 command_runner.go:130] ! I0328 01:21:44.063794       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:35.455879    6044 command_runner.go:130] ! I0328 01:21:44.064166       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:35.455879    6044 command_runner.go:130] ! I0328 01:21:44.064351       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:35.456448    6044 command_runner.go:130] ! I0328 01:21:54.080807       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:35.456448    6044 command_runner.go:130] ! I0328 01:21:54.080904       1 main.go:227] handling current node
	I0328 01:33:35.456505    6044 command_runner.go:130] ! I0328 01:21:54.080921       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:35.456505    6044 command_runner.go:130] ! I0328 01:21:54.080930       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:35.456570    6044 command_runner.go:130] ! I0328 01:21:54.081415       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:35.456623    6044 command_runner.go:130] ! I0328 01:21:54.081491       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:35.456623    6044 command_runner.go:130] ! I0328 01:22:04.094961       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:35.456677    6044 command_runner.go:130] ! I0328 01:22:04.095397       1 main.go:227] handling current node
	I0328 01:33:35.456728    6044 command_runner.go:130] ! I0328 01:22:04.095905       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:35.456728    6044 command_runner.go:130] ! I0328 01:22:04.096341       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:35.456781    6044 command_runner.go:130] ! I0328 01:22:04.096776       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:35.456781    6044 command_runner.go:130] ! I0328 01:22:04.096877       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:35.456833    6044 command_runner.go:130] ! I0328 01:22:14.117899       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:35.456888    6044 command_runner.go:130] ! I0328 01:22:14.118038       1 main.go:227] handling current node
	I0328 01:33:35.456888    6044 command_runner.go:130] ! I0328 01:22:14.118158       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:35.456953    6044 command_runner.go:130] ! I0328 01:22:14.118310       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:35.456953    6044 command_runner.go:130] ! I0328 01:22:14.118821       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:35.456953    6044 command_runner.go:130] ! I0328 01:22:14.119057       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:35.457018    6044 command_runner.go:130] ! I0328 01:22:24.139816       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:35.457018    6044 command_runner.go:130] ! I0328 01:22:24.140951       1 main.go:227] handling current node
	I0328 01:33:35.457080    6044 command_runner.go:130] ! I0328 01:22:24.140979       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:35.457080    6044 command_runner.go:130] ! I0328 01:22:24.140991       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:35.457137    6044 command_runner.go:130] ! I0328 01:22:24.141167       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:35.457198    6044 command_runner.go:130] ! I0328 01:22:24.141178       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:35.457253    6044 command_runner.go:130] ! I0328 01:22:34.156977       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:35.457253    6044 command_runner.go:130] ! I0328 01:22:34.157189       1 main.go:227] handling current node
	I0328 01:33:35.457313    6044 command_runner.go:130] ! I0328 01:22:34.157704       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:35.457313    6044 command_runner.go:130] ! I0328 01:22:34.157819       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:35.457368    6044 command_runner.go:130] ! I0328 01:22:34.158025       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:35.457368    6044 command_runner.go:130] ! I0328 01:22:34.158059       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:35.457428    6044 command_runner.go:130] ! I0328 01:22:44.166881       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:35.457428    6044 command_runner.go:130] ! I0328 01:22:44.167061       1 main.go:227] handling current node
	I0328 01:33:35.457490    6044 command_runner.go:130] ! I0328 01:22:44.167232       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:35.457554    6044 command_runner.go:130] ! I0328 01:22:44.167380       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:35.457554    6044 command_runner.go:130] ! I0328 01:22:44.167748       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:35.457668    6044 command_runner.go:130] ! I0328 01:22:44.167956       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:35.457668    6044 command_runner.go:130] ! I0328 01:22:54.177031       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:35.457734    6044 command_runner.go:130] ! I0328 01:22:54.177191       1 main.go:227] handling current node
	I0328 01:33:35.457734    6044 command_runner.go:130] ! I0328 01:22:54.177209       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:35.457793    6044 command_runner.go:130] ! I0328 01:22:54.177218       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:35.457793    6044 command_runner.go:130] ! I0328 01:22:54.177774       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:35.457856    6044 command_runner.go:130] ! I0328 01:22:54.177906       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:35.457856    6044 command_runner.go:130] ! I0328 01:23:04.192931       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:35.457912    6044 command_runner.go:130] ! I0328 01:23:04.193190       1 main.go:227] handling current node
	I0328 01:33:35.457912    6044 command_runner.go:130] ! I0328 01:23:04.193208       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:35.457975    6044 command_runner.go:130] ! I0328 01:23:04.193218       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:35.457975    6044 command_runner.go:130] ! I0328 01:23:04.193613       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:35.458034    6044 command_runner.go:130] ! I0328 01:23:04.193699       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:35.458034    6044 command_runner.go:130] ! I0328 01:23:14.203281       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:35.458119    6044 command_runner.go:130] ! I0328 01:23:14.203390       1 main.go:227] handling current node
	I0328 01:33:35.458119    6044 command_runner.go:130] ! I0328 01:23:14.203406       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:35.458119    6044 command_runner.go:130] ! I0328 01:23:14.203415       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:35.458194    6044 command_runner.go:130] ! I0328 01:23:14.204005       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:35.458194    6044 command_runner.go:130] ! I0328 01:23:14.204201       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:35.458194    6044 command_runner.go:130] ! I0328 01:23:24.220758       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:35.458248    6044 command_runner.go:130] ! I0328 01:23:24.220806       1 main.go:227] handling current node
	I0328 01:33:35.458292    6044 command_runner.go:130] ! I0328 01:23:24.220822       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:35.458332    6044 command_runner.go:130] ! I0328 01:23:24.220829       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:35.458390    6044 command_runner.go:130] ! I0328 01:23:24.221546       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:35.458390    6044 command_runner.go:130] ! I0328 01:23:24.221683       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:35.458390    6044 command_runner.go:130] ! I0328 01:23:34.228494       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:35.458390    6044 command_runner.go:130] ! I0328 01:23:34.228589       1 main.go:227] handling current node
	I0328 01:33:35.458390    6044 command_runner.go:130] ! I0328 01:23:34.228604       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:35.458390    6044 command_runner.go:130] ! I0328 01:23:34.228613       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:35.458390    6044 command_runner.go:130] ! I0328 01:23:34.229312       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:35.458390    6044 command_runner.go:130] ! I0328 01:23:34.229577       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:35.458390    6044 command_runner.go:130] ! I0328 01:23:44.244452       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:35.458390    6044 command_runner.go:130] ! I0328 01:23:44.244582       1 main.go:227] handling current node
	I0328 01:33:35.458390    6044 command_runner.go:130] ! I0328 01:23:44.244601       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:35.458390    6044 command_runner.go:130] ! I0328 01:23:44.244611       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:35.458390    6044 command_runner.go:130] ! I0328 01:23:44.245136       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:35.458390    6044 command_runner.go:130] ! I0328 01:23:44.245156       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:35.458390    6044 command_runner.go:130] ! I0328 01:23:54.250789       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:35.458390    6044 command_runner.go:130] ! I0328 01:23:54.250891       1 main.go:227] handling current node
	I0328 01:33:35.458390    6044 command_runner.go:130] ! I0328 01:23:54.250907       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:35.458390    6044 command_runner.go:130] ! I0328 01:23:54.250915       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:35.458390    6044 command_runner.go:130] ! I0328 01:23:54.251035       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:35.458390    6044 command_runner.go:130] ! I0328 01:23:54.251227       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:35.458390    6044 command_runner.go:130] ! I0328 01:24:04.266517       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:35.458390    6044 command_runner.go:130] ! I0328 01:24:04.266634       1 main.go:227] handling current node
	I0328 01:33:35.458390    6044 command_runner.go:130] ! I0328 01:24:04.266650       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:35.458390    6044 command_runner.go:130] ! I0328 01:24:04.266659       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:35.458390    6044 command_runner.go:130] ! I0328 01:24:04.266860       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:35.458390    6044 command_runner.go:130] ! I0328 01:24:04.266944       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:35.458390    6044 command_runner.go:130] ! I0328 01:24:14.281321       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:35.458390    6044 command_runner.go:130] ! I0328 01:24:14.281432       1 main.go:227] handling current node
	I0328 01:33:35.458390    6044 command_runner.go:130] ! I0328 01:24:14.281448       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:35.458390    6044 command_runner.go:130] ! I0328 01:24:14.281474       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:35.458390    6044 command_runner.go:130] ! I0328 01:24:14.281660       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:35.458390    6044 command_runner.go:130] ! I0328 01:24:14.281692       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:35.458390    6044 command_runner.go:130] ! I0328 01:24:24.289822       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:35.458390    6044 command_runner.go:130] ! I0328 01:24:24.290280       1 main.go:227] handling current node
	I0328 01:33:35.458390    6044 command_runner.go:130] ! I0328 01:24:24.290352       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:35.458390    6044 command_runner.go:130] ! I0328 01:24:24.290467       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:35.458390    6044 command_runner.go:130] ! I0328 01:24:24.290854       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:35.458390    6044 command_runner.go:130] ! I0328 01:24:24.290943       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:35.458390    6044 command_runner.go:130] ! I0328 01:24:34.303810       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:35.458390    6044 command_runner.go:130] ! I0328 01:24:34.303934       1 main.go:227] handling current node
	I0328 01:33:35.458390    6044 command_runner.go:130] ! I0328 01:24:34.303965       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:35.458937    6044 command_runner.go:130] ! I0328 01:24:34.303998       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:35.458937    6044 command_runner.go:130] ! I0328 01:24:34.304417       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:35.459001    6044 command_runner.go:130] ! I0328 01:24:34.304435       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:35.459001    6044 command_runner.go:130] ! I0328 01:24:44.325930       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:35.459001    6044 command_runner.go:130] ! I0328 01:24:44.326037       1 main.go:227] handling current node
	I0328 01:33:35.459001    6044 command_runner.go:130] ! I0328 01:24:44.326055       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:35.459001    6044 command_runner.go:130] ! I0328 01:24:44.326064       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:35.459001    6044 command_runner.go:130] ! I0328 01:24:44.327133       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:35.459001    6044 command_runner.go:130] ! I0328 01:24:44.327169       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:35.459001    6044 command_runner.go:130] ! I0328 01:24:54.342811       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:35.459001    6044 command_runner.go:130] ! I0328 01:24:54.342842       1 main.go:227] handling current node
	I0328 01:33:35.459001    6044 command_runner.go:130] ! I0328 01:24:54.342871       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:35.459001    6044 command_runner.go:130] ! I0328 01:24:54.342878       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:35.459001    6044 command_runner.go:130] ! I0328 01:24:54.343008       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:35.459001    6044 command_runner.go:130] ! I0328 01:24:54.343016       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:35.459001    6044 command_runner.go:130] ! I0328 01:25:04.359597       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:35.459001    6044 command_runner.go:130] ! I0328 01:25:04.359702       1 main.go:227] handling current node
	I0328 01:33:35.459001    6044 command_runner.go:130] ! I0328 01:25:04.359718       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:35.459001    6044 command_runner.go:130] ! I0328 01:25:04.359727       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:35.459001    6044 command_runner.go:130] ! I0328 01:25:04.360480       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:35.459001    6044 command_runner.go:130] ! I0328 01:25:04.360570       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:35.459001    6044 command_runner.go:130] ! I0328 01:25:14.367988       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:35.459001    6044 command_runner.go:130] ! I0328 01:25:14.368593       1 main.go:227] handling current node
	I0328 01:33:35.459001    6044 command_runner.go:130] ! I0328 01:25:14.368613       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:35.459001    6044 command_runner.go:130] ! I0328 01:25:14.368623       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:35.459001    6044 command_runner.go:130] ! I0328 01:25:14.368889       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:35.459001    6044 command_runner.go:130] ! I0328 01:25:14.368925       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:35.459001    6044 command_runner.go:130] ! I0328 01:25:24.402024       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:35.459001    6044 command_runner.go:130] ! I0328 01:25:24.402202       1 main.go:227] handling current node
	I0328 01:33:35.459001    6044 command_runner.go:130] ! I0328 01:25:24.402220       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:35.459001    6044 command_runner.go:130] ! I0328 01:25:24.402229       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:35.459001    6044 command_runner.go:130] ! I0328 01:25:24.402486       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:35.459001    6044 command_runner.go:130] ! I0328 01:25:24.402522       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:35.459001    6044 command_runner.go:130] ! I0328 01:25:34.417358       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:35.459001    6044 command_runner.go:130] ! I0328 01:25:34.417459       1 main.go:227] handling current node
	I0328 01:33:35.459001    6044 command_runner.go:130] ! I0328 01:25:34.417475       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:35.459001    6044 command_runner.go:130] ! I0328 01:25:34.417485       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:35.459001    6044 command_runner.go:130] ! I0328 01:25:34.417877       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:35.459001    6044 command_runner.go:130] ! I0328 01:25:34.418025       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:35.459001    6044 command_runner.go:130] ! I0328 01:25:44.434985       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:35.459001    6044 command_runner.go:130] ! I0328 01:25:44.435206       1 main.go:227] handling current node
	I0328 01:33:35.459001    6044 command_runner.go:130] ! I0328 01:25:44.435441       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:35.459545    6044 command_runner.go:130] ! I0328 01:25:44.435475       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:35.459545    6044 command_runner.go:130] ! I0328 01:25:44.435904       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:35.459545    6044 command_runner.go:130] ! I0328 01:25:44.436000       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:35.459545    6044 command_runner.go:130] ! I0328 01:25:54.449873       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:35.459545    6044 command_runner.go:130] ! I0328 01:25:54.449975       1 main.go:227] handling current node
	I0328 01:33:35.459545    6044 command_runner.go:130] ! I0328 01:25:54.449990       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:35.459545    6044 command_runner.go:130] ! I0328 01:25:54.449999       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:35.459545    6044 command_runner.go:130] ! I0328 01:25:54.450243       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:35.459545    6044 command_runner.go:130] ! I0328 01:25:54.450388       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:35.459748    6044 command_runner.go:130] ! I0328 01:26:04.463682       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:35.459799    6044 command_runner.go:130] ! I0328 01:26:04.463799       1 main.go:227] handling current node
	I0328 01:33:35.459799    6044 command_runner.go:130] ! I0328 01:26:04.463816       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:35.459799    6044 command_runner.go:130] ! I0328 01:26:04.463828       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:35.459799    6044 command_runner.go:130] ! I0328 01:26:04.463959       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:35.459799    6044 command_runner.go:130] ! I0328 01:26:04.463990       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:35.459799    6044 command_runner.go:130] ! I0328 01:26:14.470825       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:35.459799    6044 command_runner.go:130] ! I0328 01:26:14.471577       1 main.go:227] handling current node
	I0328 01:33:35.459799    6044 command_runner.go:130] ! I0328 01:26:14.471678       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:35.459799    6044 command_runner.go:130] ! I0328 01:26:14.471692       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:35.459799    6044 command_runner.go:130] ! I0328 01:26:14.472010       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:35.459799    6044 command_runner.go:130] ! I0328 01:26:14.472170       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:35.459799    6044 command_runner.go:130] ! I0328 01:26:24.485860       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:35.459799    6044 command_runner.go:130] ! I0328 01:26:24.485913       1 main.go:227] handling current node
	I0328 01:33:35.459799    6044 command_runner.go:130] ! I0328 01:26:24.485944       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:35.459799    6044 command_runner.go:130] ! I0328 01:26:24.485951       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:35.459799    6044 command_runner.go:130] ! I0328 01:26:24.486383       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:35.459799    6044 command_runner.go:130] ! I0328 01:26:24.486499       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:35.459799    6044 command_runner.go:130] ! I0328 01:26:34.502352       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:35.459799    6044 command_runner.go:130] ! I0328 01:26:34.502457       1 main.go:227] handling current node
	I0328 01:33:35.459799    6044 command_runner.go:130] ! I0328 01:26:34.502475       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:35.459799    6044 command_runner.go:130] ! I0328 01:26:34.502484       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:35.459799    6044 command_runner.go:130] ! I0328 01:26:34.502671       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:35.459799    6044 command_runner.go:130] ! I0328 01:26:34.502731       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:35.459799    6044 command_runner.go:130] ! I0328 01:26:44.515791       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:35.459799    6044 command_runner.go:130] ! I0328 01:26:44.516785       1 main.go:227] handling current node
	I0328 01:33:35.459799    6044 command_runner.go:130] ! I0328 01:26:44.517605       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:35.459799    6044 command_runner.go:130] ! I0328 01:26:44.518163       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:35.459799    6044 command_runner.go:130] ! I0328 01:26:44.518724       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:35.459799    6044 command_runner.go:130] ! I0328 01:26:44.519042       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:35.459799    6044 command_runner.go:130] ! I0328 01:26:54.536706       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:35.459799    6044 command_runner.go:130] ! I0328 01:26:54.536762       1 main.go:227] handling current node
	I0328 01:33:35.459799    6044 command_runner.go:130] ! I0328 01:26:54.536796       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:35.459799    6044 command_runner.go:130] ! I0328 01:26:54.537236       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:35.459799    6044 command_runner.go:130] ! I0328 01:26:54.537725       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:35.459799    6044 command_runner.go:130] ! I0328 01:26:54.537823       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:35.459799    6044 command_runner.go:130] ! I0328 01:27:04.553753       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:35.459799    6044 command_runner.go:130] ! I0328 01:27:04.553802       1 main.go:227] handling current node
	I0328 01:33:35.459799    6044 command_runner.go:130] ! I0328 01:27:04.553813       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:35.459799    6044 command_runner.go:130] ! I0328 01:27:04.553820       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:35.459799    6044 command_runner.go:130] ! I0328 01:27:04.554279       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:35.459799    6044 command_runner.go:130] ! I0328 01:27:04.554301       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:35.459799    6044 command_runner.go:130] ! I0328 01:27:14.572473       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:35.459799    6044 command_runner.go:130] ! I0328 01:27:14.572567       1 main.go:227] handling current node
	I0328 01:33:35.459799    6044 command_runner.go:130] ! I0328 01:27:14.572583       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:35.459799    6044 command_runner.go:130] ! I0328 01:27:14.572591       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:35.459799    6044 command_runner.go:130] ! I0328 01:27:14.572710       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:35.459799    6044 command_runner.go:130] ! I0328 01:27:14.572740       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:35.459799    6044 command_runner.go:130] ! I0328 01:27:24.579996       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:35.459799    6044 command_runner.go:130] ! I0328 01:27:24.580041       1 main.go:227] handling current node
	I0328 01:33:35.459799    6044 command_runner.go:130] ! I0328 01:27:24.580053       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:35.459799    6044 command_runner.go:130] ! I0328 01:27:24.580357       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:35.459799    6044 command_runner.go:130] ! I0328 01:27:34.590722       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:35.459799    6044 command_runner.go:130] ! I0328 01:27:34.590837       1 main.go:227] handling current node
	I0328 01:33:35.459799    6044 command_runner.go:130] ! I0328 01:27:34.590855       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:35.459799    6044 command_runner.go:130] ! I0328 01:27:34.590864       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:35.459799    6044 command_runner.go:130] ! I0328 01:27:34.591158       1 main.go:223] Handling node with IPs: map[172.28.224.172:{}]
	I0328 01:33:35.459799    6044 command_runner.go:130] ! I0328 01:27:34.591426       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.3.0/24] 
	I0328 01:33:35.459799    6044 command_runner.go:130] ! I0328 01:27:34.591599       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.3.0/24 Src: <nil> Gw: 172.28.224.172 Flags: [] Table: 0} 
	I0328 01:33:35.459799    6044 command_runner.go:130] ! I0328 01:27:44.598527       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:35.459799    6044 command_runner.go:130] ! I0328 01:27:44.598576       1 main.go:227] handling current node
	I0328 01:33:35.459799    6044 command_runner.go:130] ! I0328 01:27:44.598590       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:35.459799    6044 command_runner.go:130] ! I0328 01:27:44.598597       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:35.459799    6044 command_runner.go:130] ! I0328 01:27:44.599051       1 main.go:223] Handling node with IPs: map[172.28.224.172:{}]
	I0328 01:33:35.459799    6044 command_runner.go:130] ! I0328 01:27:44.599199       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.3.0/24] 
	I0328 01:33:35.459799    6044 command_runner.go:130] ! I0328 01:27:54.612380       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:35.459799    6044 command_runner.go:130] ! I0328 01:27:54.612492       1 main.go:227] handling current node
	I0328 01:33:35.459799    6044 command_runner.go:130] ! I0328 01:27:54.612511       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:35.459799    6044 command_runner.go:130] ! I0328 01:27:54.612521       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:35.459799    6044 command_runner.go:130] ! I0328 01:27:54.612644       1 main.go:223] Handling node with IPs: map[172.28.224.172:{}]
	I0328 01:33:35.459799    6044 command_runner.go:130] ! I0328 01:27:54.612675       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.3.0/24] 
	I0328 01:33:35.459799    6044 command_runner.go:130] ! I0328 01:28:04.619944       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:35.459799    6044 command_runner.go:130] ! I0328 01:28:04.619975       1 main.go:227] handling current node
	I0328 01:33:35.459799    6044 command_runner.go:130] ! I0328 01:28:04.619987       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:35.459799    6044 command_runner.go:130] ! I0328 01:28:04.619994       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:35.459799    6044 command_runner.go:130] ! I0328 01:28:04.620739       1 main.go:223] Handling node with IPs: map[172.28.224.172:{}]
	I0328 01:33:35.459799    6044 command_runner.go:130] ! I0328 01:28:04.620826       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.3.0/24] 
	I0328 01:33:35.459799    6044 command_runner.go:130] ! I0328 01:28:14.637978       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:35.459799    6044 command_runner.go:130] ! I0328 01:28:14.638455       1 main.go:227] handling current node
	I0328 01:33:35.459799    6044 command_runner.go:130] ! I0328 01:28:14.639024       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:35.461036    6044 command_runner.go:130] ! I0328 01:28:14.639507       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:35.461036    6044 command_runner.go:130] ! I0328 01:28:14.640025       1 main.go:223] Handling node with IPs: map[172.28.224.172:{}]
	I0328 01:33:35.461036    6044 command_runner.go:130] ! I0328 01:28:14.640512       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.3.0/24] 
	I0328 01:33:35.461036    6044 command_runner.go:130] ! I0328 01:28:24.648901       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:35.461036    6044 command_runner.go:130] ! I0328 01:28:24.649550       1 main.go:227] handling current node
	I0328 01:33:35.461036    6044 command_runner.go:130] ! I0328 01:28:24.649741       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:35.461191    6044 command_runner.go:130] ! I0328 01:28:24.650198       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:35.461247    6044 command_runner.go:130] ! I0328 01:28:24.650806       1 main.go:223] Handling node with IPs: map[172.28.224.172:{}]
	I0328 01:33:35.461247    6044 command_runner.go:130] ! I0328 01:28:24.651143       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.3.0/24] 
	I0328 01:33:35.461304    6044 command_runner.go:130] ! I0328 01:28:34.657839       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:35.461304    6044 command_runner.go:130] ! I0328 01:28:34.658038       1 main.go:227] handling current node
	I0328 01:33:35.461357    6044 command_runner.go:130] ! I0328 01:28:34.658054       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:35.461357    6044 command_runner.go:130] ! I0328 01:28:34.658080       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:35.461357    6044 command_runner.go:130] ! I0328 01:28:34.658271       1 main.go:223] Handling node with IPs: map[172.28.224.172:{}]
	I0328 01:33:35.461357    6044 command_runner.go:130] ! I0328 01:28:34.658831       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.3.0/24] 
	I0328 01:33:35.461357    6044 command_runner.go:130] ! I0328 01:28:44.666644       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:35.461357    6044 command_runner.go:130] ! I0328 01:28:44.666752       1 main.go:227] handling current node
	I0328 01:33:35.461357    6044 command_runner.go:130] ! I0328 01:28:44.666769       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:35.461357    6044 command_runner.go:130] ! I0328 01:28:44.666778       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:35.461357    6044 command_runner.go:130] ! I0328 01:28:44.667298       1 main.go:223] Handling node with IPs: map[172.28.224.172:{}]
	I0328 01:33:35.461357    6044 command_runner.go:130] ! I0328 01:28:44.667513       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.3.0/24] 
	I0328 01:33:35.461357    6044 command_runner.go:130] ! I0328 01:28:54.679890       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:35.461357    6044 command_runner.go:130] ! I0328 01:28:54.679999       1 main.go:227] handling current node
	I0328 01:33:35.461357    6044 command_runner.go:130] ! I0328 01:28:54.680015       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:35.461357    6044 command_runner.go:130] ! I0328 01:28:54.680023       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:35.461357    6044 command_runner.go:130] ! I0328 01:28:54.680512       1 main.go:223] Handling node with IPs: map[172.28.224.172:{}]
	I0328 01:33:35.461357    6044 command_runner.go:130] ! I0328 01:28:54.680547       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.3.0/24] 
	I0328 01:33:35.461357    6044 command_runner.go:130] ! I0328 01:29:04.687598       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:35.461357    6044 command_runner.go:130] ! I0328 01:29:04.687765       1 main.go:227] handling current node
	I0328 01:33:35.461357    6044 command_runner.go:130] ! I0328 01:29:04.687785       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:35.461357    6044 command_runner.go:130] ! I0328 01:29:04.687796       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:35.461357    6044 command_runner.go:130] ! I0328 01:29:04.687963       1 main.go:223] Handling node with IPs: map[172.28.224.172:{}]
	I0328 01:33:35.461357    6044 command_runner.go:130] ! I0328 01:29:04.687979       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.3.0/24] 
	I0328 01:33:35.461357    6044 command_runner.go:130] ! I0328 01:29:14.698762       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:35.461357    6044 command_runner.go:130] ! I0328 01:29:14.698810       1 main.go:227] handling current node
	I0328 01:33:35.461357    6044 command_runner.go:130] ! I0328 01:29:14.698825       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:35.461357    6044 command_runner.go:130] ! I0328 01:29:14.698832       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:35.461357    6044 command_runner.go:130] ! I0328 01:29:14.699169       1 main.go:223] Handling node with IPs: map[172.28.224.172:{}]
	I0328 01:33:35.461357    6044 command_runner.go:130] ! I0328 01:29:14.699203       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.3.0/24] 
	I0328 01:33:35.461357    6044 command_runner.go:130] ! I0328 01:29:24.717977       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:35.461898    6044 command_runner.go:130] ! I0328 01:29:24.718118       1 main.go:227] handling current node
	I0328 01:33:35.461963    6044 command_runner.go:130] ! I0328 01:29:24.718136       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:35.461963    6044 command_runner.go:130] ! I0328 01:29:24.718145       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:35.461963    6044 command_runner.go:130] ! I0328 01:29:24.718279       1 main.go:223] Handling node with IPs: map[172.28.224.172:{}]
	I0328 01:33:35.461963    6044 command_runner.go:130] ! I0328 01:29:24.718311       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.3.0/24] 
	I0328 01:33:35.461963    6044 command_runner.go:130] ! I0328 01:29:34.724517       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:35.461963    6044 command_runner.go:130] ! I0328 01:29:34.724618       1 main.go:227] handling current node
	I0328 01:33:35.461963    6044 command_runner.go:130] ! I0328 01:29:34.724634       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:35.461963    6044 command_runner.go:130] ! I0328 01:29:34.724643       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:35.461963    6044 command_runner.go:130] ! I0328 01:29:34.725226       1 main.go:223] Handling node with IPs: map[172.28.224.172:{}]
	I0328 01:33:35.461963    6044 command_runner.go:130] ! I0328 01:29:34.725416       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.3.0/24] 
	I0328 01:33:35.481240    6044 logs.go:123] Gathering logs for Docker ...
	I0328 01:33:35.481240    6044 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0328 01:33:35.520344    6044 command_runner.go:130] > Mar 28 01:30:39 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0328 01:33:35.520344    6044 command_runner.go:130] > Mar 28 01:30:39 minikube cri-dockerd[221]: time="2024-03-28T01:30:39Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0328 01:33:35.520344    6044 command_runner.go:130] > Mar 28 01:30:39 minikube cri-dockerd[221]: time="2024-03-28T01:30:39Z" level=info msg="Start docker client with request timeout 0s"
	I0328 01:33:35.520344    6044 command_runner.go:130] > Mar 28 01:30:39 minikube cri-dockerd[221]: time="2024-03-28T01:30:39Z" level=fatal msg="failed to get docker version: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0328 01:33:35.520549    6044 command_runner.go:130] > Mar 28 01:30:39 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0328 01:33:35.520549    6044 command_runner.go:130] > Mar 28 01:30:39 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0328 01:33:35.520549    6044 command_runner.go:130] > Mar 28 01:30:39 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0328 01:33:35.520665    6044 command_runner.go:130] > Mar 28 01:30:42 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 1.
	I0328 01:33:35.520665    6044 command_runner.go:130] > Mar 28 01:30:42 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0328 01:33:35.520665    6044 command_runner.go:130] > Mar 28 01:30:42 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0328 01:33:35.520665    6044 command_runner.go:130] > Mar 28 01:30:42 minikube cri-dockerd[411]: time="2024-03-28T01:30:42Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0328 01:33:35.520665    6044 command_runner.go:130] > Mar 28 01:30:42 minikube cri-dockerd[411]: time="2024-03-28T01:30:42Z" level=info msg="Start docker client with request timeout 0s"
	I0328 01:33:35.520665    6044 command_runner.go:130] > Mar 28 01:30:42 minikube cri-dockerd[411]: time="2024-03-28T01:30:42Z" level=fatal msg="failed to get docker version: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0328 01:33:35.520665    6044 command_runner.go:130] > Mar 28 01:30:42 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0328 01:33:35.520665    6044 command_runner.go:130] > Mar 28 01:30:42 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0328 01:33:35.520665    6044 command_runner.go:130] > Mar 28 01:30:42 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0328 01:33:35.520665    6044 command_runner.go:130] > Mar 28 01:30:44 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 2.
	I0328 01:33:35.520665    6044 command_runner.go:130] > Mar 28 01:30:44 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0328 01:33:35.520665    6044 command_runner.go:130] > Mar 28 01:30:44 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0328 01:33:35.520665    6044 command_runner.go:130] > Mar 28 01:30:44 minikube cri-dockerd[432]: time="2024-03-28T01:30:44Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0328 01:33:35.520665    6044 command_runner.go:130] > Mar 28 01:30:44 minikube cri-dockerd[432]: time="2024-03-28T01:30:44Z" level=info msg="Start docker client with request timeout 0s"
	I0328 01:33:35.520665    6044 command_runner.go:130] > Mar 28 01:30:44 minikube cri-dockerd[432]: time="2024-03-28T01:30:44Z" level=fatal msg="failed to get docker version: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0328 01:33:35.520665    6044 command_runner.go:130] > Mar 28 01:30:44 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0328 01:33:35.520665    6044 command_runner.go:130] > Mar 28 01:30:44 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0328 01:33:35.520665    6044 command_runner.go:130] > Mar 28 01:30:44 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0328 01:33:35.520665    6044 command_runner.go:130] > Mar 28 01:30:46 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 3.
	I0328 01:33:35.520665    6044 command_runner.go:130] > Mar 28 01:30:46 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0328 01:33:35.520665    6044 command_runner.go:130] > Mar 28 01:30:46 minikube systemd[1]: cri-docker.service: Start request repeated too quickly.
	I0328 01:33:35.520665    6044 command_runner.go:130] > Mar 28 01:30:46 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0328 01:33:35.520665    6044 command_runner.go:130] > Mar 28 01:30:46 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0328 01:33:35.520665    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 systemd[1]: Starting Docker Application Container Engine...
	I0328 01:33:35.520665    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[661]: time="2024-03-28T01:31:35.187514586Z" level=info msg="Starting up"
	I0328 01:33:35.520665    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[661]: time="2024-03-28T01:31:35.188793924Z" level=info msg="containerd not running, starting managed containerd"
	I0328 01:33:35.520665    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[661]: time="2024-03-28T01:31:35.190152365Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=667
	I0328 01:33:35.520665    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.231336402Z" level=info msg="starting containerd" revision=dcf2847247e18caba8dce86522029642f60fe96b version=v1.7.14
	I0328 01:33:35.520665    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.261679714Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0328 01:33:35.520665    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.261844319Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0328 01:33:35.521262    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.262043225Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0328 01:33:35.521262    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.262141928Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0328 01:33:35.521262    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.262784947Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0328 01:33:35.521262    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.262879050Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0328 01:33:35.521262    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.263137658Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0328 01:33:35.521262    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.263270562Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0328 01:33:35.521262    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.263294463Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	I0328 01:33:35.521262    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.263307663Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0328 01:33:35.521519    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.263734076Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0328 01:33:35.521519    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.264531200Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0328 01:33:35.521519    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.267908401Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0328 01:33:35.521519    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.268045005Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0328 01:33:35.521519    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.268342414Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0328 01:33:35.521519    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.268438817Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0328 01:33:35.521519    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.269089237Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0328 01:33:35.521739    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.269210440Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	I0328 01:33:35.521739    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.269296343Z" level=info msg="metadata content store policy set" policy=shared
	I0328 01:33:35.521739    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.277331684Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0328 01:33:35.521886    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.277533790Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0328 01:33:35.521886    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.277593492Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0328 01:33:35.521886    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.277648694Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0328 01:33:35.521886    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.277726596Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0328 01:33:35.521886    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.277896701Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0328 01:33:35.521886    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.279273243Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0328 01:33:35.522037    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.279706256Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0328 01:33:35.522037    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.279852560Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0328 01:33:35.522037    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.280041166Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0328 01:33:35.522118    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.280280073Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0328 01:33:35.522118    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.280373676Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0328 01:33:35.522118    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.280594982Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0328 01:33:35.522118    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.280657284Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0328 01:33:35.522200    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.280684285Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0328 01:33:35.522200    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.280713086Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0328 01:33:35.522279    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.280731986Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0328 01:33:35.522279    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.280779288Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0328 01:33:35.522279    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.281122598Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0328 01:33:35.522279    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.281392306Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0328 01:33:35.522374    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.281419307Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0328 01:33:35.522374    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.281475909Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0328 01:33:35.522374    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.281497309Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0328 01:33:35.522374    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.281513210Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0328 01:33:35.522451    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.281527910Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0328 01:33:35.522451    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.281575712Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0328 01:33:35.522451    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.281605113Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0328 01:33:35.522527    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.281624613Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0328 01:33:35.522527    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.281640414Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0328 01:33:35.522527    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.281688915Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0328 01:33:35.522527    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.281906822Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0328 01:33:35.522527    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.282137929Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0328 01:33:35.522625    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.282171230Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0328 01:33:35.522625    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.282426837Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0328 01:33:35.522625    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.282452838Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0328 01:33:35.522625    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.282645244Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0328 01:33:35.522625    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.282848450Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	I0328 01:33:35.522625    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.282869251Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0328 01:33:35.522625    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.282883451Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	I0328 01:33:35.522625    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.282996354Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0328 01:33:35.522625    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.283034556Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0328 01:33:35.522867    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.283048856Z" level=info msg="NRI interface is disabled by configuration."
	I0328 01:33:35.522867    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.283357365Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0328 01:33:35.522867    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.283501170Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0328 01:33:35.522961    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.283575472Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0328 01:33:35.522961    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.283615173Z" level=info msg="containerd successfully booted in 0.056485s"
	I0328 01:33:35.522961    6044 command_runner.go:130] > Mar 28 01:31:36 multinode-240000 dockerd[661]: time="2024-03-28T01:31:36.252048243Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0328 01:33:35.523001    6044 command_runner.go:130] > Mar 28 01:31:36 multinode-240000 dockerd[661]: time="2024-03-28T01:31:36.458814267Z" level=info msg="Loading containers: start."
	I0328 01:33:35.523001    6044 command_runner.go:130] > Mar 28 01:31:36 multinode-240000 dockerd[661]: time="2024-03-28T01:31:36.940030727Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0328 01:33:35.523001    6044 command_runner.go:130] > Mar 28 01:31:37 multinode-240000 dockerd[661]: time="2024-03-28T01:31:37.031415390Z" level=info msg="Loading containers: done."
	I0328 01:33:35.523094    6044 command_runner.go:130] > Mar 28 01:31:37 multinode-240000 dockerd[661]: time="2024-03-28T01:31:37.065830879Z" level=info msg="Docker daemon" commit=8b79278 containerd-snapshotter=false storage-driver=overlay2 version=26.0.0
	I0328 01:33:35.523094    6044 command_runner.go:130] > Mar 28 01:31:37 multinode-240000 dockerd[661]: time="2024-03-28T01:31:37.066918879Z" level=info msg="Daemon has completed initialization"
	I0328 01:33:35.523094    6044 command_runner.go:130] > Mar 28 01:31:37 multinode-240000 dockerd[661]: time="2024-03-28T01:31:37.126063860Z" level=info msg="API listen on /var/run/docker.sock"
	I0328 01:33:35.523094    6044 command_runner.go:130] > Mar 28 01:31:37 multinode-240000 dockerd[661]: time="2024-03-28T01:31:37.126232160Z" level=info msg="API listen on [::]:2376"
	I0328 01:33:35.523177    6044 command_runner.go:130] > Mar 28 01:31:37 multinode-240000 systemd[1]: Started Docker Application Container Engine.
	I0328 01:33:35.523177    6044 command_runner.go:130] > Mar 28 01:32:04 multinode-240000 dockerd[661]: time="2024-03-28T01:32:04.977526069Z" level=info msg="Processing signal 'terminated'"
	I0328 01:33:35.523177    6044 command_runner.go:130] > Mar 28 01:32:04 multinode-240000 dockerd[661]: time="2024-03-28T01:32:04.980026875Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0328 01:33:35.523177    6044 command_runner.go:130] > Mar 28 01:32:04 multinode-240000 systemd[1]: Stopping Docker Application Container Engine...
	I0328 01:33:35.523177    6044 command_runner.go:130] > Mar 28 01:32:04 multinode-240000 dockerd[661]: time="2024-03-28T01:32:04.981008678Z" level=info msg="Daemon shutdown complete"
	I0328 01:33:35.523255    6044 command_runner.go:130] > Mar 28 01:32:04 multinode-240000 dockerd[661]: time="2024-03-28T01:32:04.981100578Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0328 01:33:35.523255    6044 command_runner.go:130] > Mar 28 01:32:04 multinode-240000 dockerd[661]: time="2024-03-28T01:32:04.981126378Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0328 01:33:35.523255    6044 command_runner.go:130] > Mar 28 01:32:05 multinode-240000 systemd[1]: docker.service: Deactivated successfully.
	I0328 01:33:35.523255    6044 command_runner.go:130] > Mar 28 01:32:05 multinode-240000 systemd[1]: Stopped Docker Application Container Engine.
	I0328 01:33:35.523255    6044 command_runner.go:130] > Mar 28 01:32:05 multinode-240000 systemd[1]: Starting Docker Application Container Engine...
	I0328 01:33:35.523335    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1051]: time="2024-03-28T01:32:06.063559195Z" level=info msg="Starting up"
	I0328 01:33:35.523335    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1051]: time="2024-03-28T01:32:06.064631697Z" level=info msg="containerd not running, starting managed containerd"
	I0328 01:33:35.523335    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1051]: time="2024-03-28T01:32:06.065637900Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1057
	I0328 01:33:35.523421    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.100209087Z" level=info msg="starting containerd" revision=dcf2847247e18caba8dce86522029642f60fe96b version=v1.7.14
	I0328 01:33:35.523421    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.130085762Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0328 01:33:35.523421    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.130208062Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0328 01:33:35.523421    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.130256862Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0328 01:33:35.523501    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.130275562Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0328 01:33:35.523501    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.130311762Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0328 01:33:35.523501    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.130326962Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0328 01:33:35.523580    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.130572163Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0328 01:33:35.523580    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.130673463Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0328 01:33:35.523580    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.130696363Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	I0328 01:33:35.523693    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.130764663Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0328 01:33:35.523726    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.130798363Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0328 01:33:35.523726    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.130926864Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0328 01:33:35.523756    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.134236672Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0328 01:33:35.523756    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.134361772Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0328 01:33:35.523756    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.134599073Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0328 01:33:35.523756    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.134797173Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0328 01:33:35.523756    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.135068574Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0328 01:33:35.523756    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.135093174Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	I0328 01:33:35.523756    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.135148374Z" level=info msg="metadata content store policy set" policy=shared
	I0328 01:33:35.523756    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.135673176Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0328 01:33:35.523756    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.135920276Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0328 01:33:35.523756    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.135946676Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0328 01:33:35.523756    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.135980176Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0328 01:33:35.523756    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.135997376Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0328 01:33:35.523756    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.136050377Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0328 01:33:35.523756    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.136660078Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0328 01:33:35.523756    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.136812179Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0328 01:33:35.523756    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.136923379Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0328 01:33:35.523756    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.136946979Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0328 01:33:35.523756    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.136964679Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0328 01:33:35.523756    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.136991479Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0328 01:33:35.523756    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.137010579Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0328 01:33:35.523756    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.137027279Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0328 01:33:35.523756    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.137099479Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0328 01:33:35.523756    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.137235380Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0328 01:33:35.523756    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.137265080Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0328 01:33:35.523756    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.137281180Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0328 01:33:35.524365    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.137304080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0328 01:33:35.524365    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.137320180Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0328 01:33:35.524365    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.137338080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0328 01:33:35.524365    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.137353080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0328 01:33:35.524365    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.137374080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0328 01:33:35.524365    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.137389280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0328 01:33:35.524487    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.137427380Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0328 01:33:35.524526    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.137553380Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0328 01:33:35.524526    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.137633981Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0328 01:33:35.524526    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.137657481Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0328 01:33:35.524526    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.137672181Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0328 01:33:35.524604    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.137686281Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0328 01:33:35.524604    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.137700481Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0328 01:33:35.524703    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.137771381Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0328 01:33:35.524703    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.137797181Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0328 01:33:35.524781    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.137811481Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0328 01:33:35.524781    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.137826081Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0328 01:33:35.524781    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.137953481Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0328 01:33:35.524861    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.137975581Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	I0328 01:33:35.524861    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.137988781Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0328 01:33:35.524861    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.138001082Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	I0328 01:33:35.524945    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.138075582Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0328 01:33:35.524945    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.138191982Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0328 01:33:35.524945    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.138211082Z" level=info msg="NRI interface is disabled by configuration."
	I0328 01:33:35.524945    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.138597783Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0328 01:33:35.525025    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.138694583Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0328 01:33:35.525025    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.138839884Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0328 01:33:35.525103    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.138866684Z" level=info msg="containerd successfully booted in 0.040774s"
	I0328 01:33:35.525103    6044 command_runner.go:130] > Mar 28 01:32:07 multinode-240000 dockerd[1051]: time="2024-03-28T01:32:07.114634333Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0328 01:33:35.525103    6044 command_runner.go:130] > Mar 28 01:32:07 multinode-240000 dockerd[1051]: time="2024-03-28T01:32:07.151787026Z" level=info msg="Loading containers: start."
	I0328 01:33:35.525103    6044 command_runner.go:130] > Mar 28 01:32:07 multinode-240000 dockerd[1051]: time="2024-03-28T01:32:07.470888727Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0328 01:33:35.525181    6044 command_runner.go:130] > Mar 28 01:32:07 multinode-240000 dockerd[1051]: time="2024-03-28T01:32:07.559958251Z" level=info msg="Loading containers: done."
	I0328 01:33:35.525181    6044 command_runner.go:130] > Mar 28 01:32:07 multinode-240000 dockerd[1051]: time="2024-03-28T01:32:07.589960526Z" level=info msg="Docker daemon" commit=8b79278 containerd-snapshotter=false storage-driver=overlay2 version=26.0.0
	I0328 01:33:35.525181    6044 command_runner.go:130] > Mar 28 01:32:07 multinode-240000 dockerd[1051]: time="2024-03-28T01:32:07.590109426Z" level=info msg="Daemon has completed initialization"
	I0328 01:33:35.525181    6044 command_runner.go:130] > Mar 28 01:32:07 multinode-240000 dockerd[1051]: time="2024-03-28T01:32:07.638170147Z" level=info msg="API listen on /var/run/docker.sock"
	I0328 01:33:35.525259    6044 command_runner.go:130] > Mar 28 01:32:07 multinode-240000 systemd[1]: Started Docker Application Container Engine.
	I0328 01:33:35.525259    6044 command_runner.go:130] > Mar 28 01:32:07 multinode-240000 dockerd[1051]: time="2024-03-28T01:32:07.638290047Z" level=info msg="API listen on [::]:2376"
	I0328 01:33:35.525259    6044 command_runner.go:130] > Mar 28 01:32:08 multinode-240000 systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0328 01:33:35.525259    6044 command_runner.go:130] > Mar 28 01:32:08 multinode-240000 cri-dockerd[1277]: time="2024-03-28T01:32:08Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0328 01:33:35.525259    6044 command_runner.go:130] > Mar 28 01:32:08 multinode-240000 cri-dockerd[1277]: time="2024-03-28T01:32:08Z" level=info msg="Start docker client with request timeout 0s"
	I0328 01:33:35.525338    6044 command_runner.go:130] > Mar 28 01:32:08 multinode-240000 cri-dockerd[1277]: time="2024-03-28T01:32:08Z" level=info msg="Hairpin mode is set to hairpin-veth"
	I0328 01:33:35.525338    6044 command_runner.go:130] > Mar 28 01:32:08 multinode-240000 cri-dockerd[1277]: time="2024-03-28T01:32:08Z" level=info msg="Loaded network plugin cni"
	I0328 01:33:35.525338    6044 command_runner.go:130] > Mar 28 01:32:08 multinode-240000 cri-dockerd[1277]: time="2024-03-28T01:32:08Z" level=info msg="Docker cri networking managed by network plugin cni"
	I0328 01:33:35.525504    6044 command_runner.go:130] > Mar 28 01:32:08 multinode-240000 cri-dockerd[1277]: time="2024-03-28T01:32:08Z" level=info msg="Docker Info: &{ID:c06283fc-1f43-4b26-80be-81922335c5fe Containers:18 ContainersRunning:0 ContainersPaused:0 ContainersStopped:18 Images:10 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:[] Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:[] Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6tables:true Debug:false NFd:27 OomKillDisable:false NGoroutines:49 SystemTime:2024-03-28T01:32:08.776685604Z LoggingDriver:json-file CgroupDriver:cgroupfs CgroupVersion:2 NEventsListener:0 KernelVersion:5.10.207 OperatingSystem:Buildroot 2023.02.9 OSVersion:2023.02.9 OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:0xc0002cf3b0 NCPU:2 MemTotal:2216206336 GenericResources:[] DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:multinode-240000 Labels:[provider=hyperv] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:map[io.containerd.runc.v2:{Path:runc Args:[] Shim:<nil>} runc:{Path:runc Args:[] Shim:<nil>}] DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:[] Nodes:0 Managers:0 Cluster:<nil> Warnings:[]} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dcf2847247e18caba8dce86522029642f60fe96b Expected:dcf2847247e18caba8dce86522029642f60fe96b} RuncCommit:{ID:51d5e94601ceffbbd85688df1c928ecccbfa4685 Expected:51d5e94601ceffbbd85688df1c928ecccbfa4685} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=builtin name=cgroupns] ProductLicense:Community Engine DefaultAddressPools:[] Warnings:[]}"
	I0328 01:33:35.525504    6044 command_runner.go:130] > Mar 28 01:32:08 multinode-240000 cri-dockerd[1277]: time="2024-03-28T01:32:08Z" level=info msg="Setting cgroupDriver cgroupfs"
	I0328 01:33:35.525504    6044 command_runner.go:130] > Mar 28 01:32:08 multinode-240000 cri-dockerd[1277]: time="2024-03-28T01:32:08Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	I0328 01:33:35.525504    6044 command_runner.go:130] > Mar 28 01:32:08 multinode-240000 cri-dockerd[1277]: time="2024-03-28T01:32:08Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	I0328 01:33:35.525598    6044 command_runner.go:130] > Mar 28 01:32:08 multinode-240000 cri-dockerd[1277]: time="2024-03-28T01:32:08Z" level=info msg="Start cri-dockerd grpc backend"
	I0328 01:33:35.525598    6044 command_runner.go:130] > Mar 28 01:32:08 multinode-240000 systemd[1]: Started CRI Interface for Docker Application Container Engine.
	I0328 01:33:35.525678    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 cri-dockerd[1277]: time="2024-03-28T01:32:13Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"busybox-7fdf7869d9-ct428_default\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"930fbfde452c0b2b3f13a6751fc648a70e87137f38175cb6dd161b40193b9a79\""
	I0328 01:33:35.525678    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 cri-dockerd[1277]: time="2024-03-28T01:32:13Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-76f75df574-776ph_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"6b6f67390b0701700963eec28e4c4cc4aa0e852e4ec0f2392f0f6f5d9bdad52a\""
	I0328 01:33:35.525678    6044 command_runner.go:130] > Mar 28 01:32:14 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:14.605075633Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0328 01:33:35.525777    6044 command_runner.go:130] > Mar 28 01:32:14 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:14.605218534Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0328 01:33:35.525777    6044 command_runner.go:130] > Mar 28 01:32:14 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:14.605234734Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0328 01:33:35.525777    6044 command_runner.go:130] > Mar 28 01:32:14 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:14.606038436Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0328 01:33:35.525852    6044 command_runner.go:130] > Mar 28 01:32:14 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:14.748289893Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0328 01:33:35.525852    6044 command_runner.go:130] > Mar 28 01:32:14 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:14.748491293Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0328 01:33:35.525927    6044 command_runner.go:130] > Mar 28 01:32:14 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:14.748521793Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0328 01:33:35.525927    6044 command_runner.go:130] > Mar 28 01:32:14 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:14.748642993Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0328 01:33:35.525927    6044 command_runner.go:130] > Mar 28 01:32:14 multinode-240000 cri-dockerd[1277]: time="2024-03-28T01:32:14Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/3314134e34d83c71815af773bff505973dcb9797421f75a59b98862dc8bc69bf/resolv.conf as [nameserver 172.28.224.1]"
	I0328 01:33:35.526002    6044 command_runner.go:130] > Mar 28 01:32:14 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:14.844158033Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0328 01:33:35.526002    6044 command_runner.go:130] > Mar 28 01:32:14 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:14.844387234Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0328 01:33:35.526002    6044 command_runner.go:130] > Mar 28 01:32:14 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:14.844509634Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0328 01:33:35.526075    6044 command_runner.go:130] > Mar 28 01:32:14 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:14.844924435Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0328 01:33:35.526075    6044 command_runner.go:130] > Mar 28 01:32:14 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:14.862145778Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0328 01:33:35.526150    6044 command_runner.go:130] > Mar 28 01:32:14 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:14.862239979Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0328 01:33:35.526150    6044 command_runner.go:130] > Mar 28 01:32:14 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:14.862251979Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0328 01:33:35.526150    6044 command_runner.go:130] > Mar 28 01:32:14 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:14.862457779Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0328 01:33:35.526237    6044 command_runner.go:130] > Mar 28 01:32:14 multinode-240000 cri-dockerd[1277]: time="2024-03-28T01:32:14Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/8cf9dbbfda9ea6f2b61a134374c1f92196fe22bde8e166de86c62d863a2fbdb9/resolv.conf as [nameserver 172.28.224.1]"
	I0328 01:33:35.526237    6044 command_runner.go:130] > Mar 28 01:32:15 multinode-240000 cri-dockerd[1277]: time="2024-03-28T01:32:15Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/8780a18ab975521e6b1b20e4b7cffe786927f03654dd858b9d179f1d73d13d81/resolv.conf as [nameserver 172.28.224.1]"
	I0328 01:33:35.526237    6044 command_runner.go:130] > Mar 28 01:32:15 multinode-240000 cri-dockerd[1277]: time="2024-03-28T01:32:15Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/4dd7c4652074475872599900ce854e48425a373dfa665073bd9bfb56fa5330c0/resolv.conf as [nameserver 172.28.224.1]"
	I0328 01:33:35.526312    6044 command_runner.go:130] > Mar 28 01:32:15 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:15.196398617Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0328 01:33:35.526312    6044 command_runner.go:130] > Mar 28 01:32:15 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:15.196541018Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0328 01:33:35.526312    6044 command_runner.go:130] > Mar 28 01:32:15 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:15.196606818Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0328 01:33:35.526386    6044 command_runner.go:130] > Mar 28 01:32:15 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:15.199212424Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0328 01:33:35.526386    6044 command_runner.go:130] > Mar 28 01:32:15 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:15.279595426Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0328 01:33:35.526484    6044 command_runner.go:130] > Mar 28 01:32:15 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:15.279693326Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0328 01:33:35.526484    6044 command_runner.go:130] > Mar 28 01:32:15 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:15.279767327Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0328 01:33:35.526484    6044 command_runner.go:130] > Mar 28 01:32:15 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:15.280052327Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0328 01:33:35.526557    6044 command_runner.go:130] > Mar 28 01:32:15 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:15.393428912Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0328 01:33:35.526588    6044 command_runner.go:130] > Mar 28 01:32:15 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:15.393536412Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0328 01:33:35.526617    6044 command_runner.go:130] > Mar 28 01:32:15 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:15.393553112Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0328 01:33:35.526617    6044 command_runner.go:130] > Mar 28 01:32:15 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:15.393951413Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0328 01:33:35.526617    6044 command_runner.go:130] > Mar 28 01:32:15 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:15.409559852Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0328 01:33:35.526617    6044 command_runner.go:130] > Mar 28 01:32:15 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:15.409616852Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0328 01:33:35.526617    6044 command_runner.go:130] > Mar 28 01:32:15 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:15.409628953Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0328 01:33:35.526617    6044 command_runner.go:130] > Mar 28 01:32:15 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:15.410047254Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0328 01:33:35.526617    6044 command_runner.go:130] > Mar 28 01:32:19 multinode-240000 cri-dockerd[1277]: time="2024-03-28T01:32:19Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
	I0328 01:33:35.526617    6044 command_runner.go:130] > Mar 28 01:32:20 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:20.444492990Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0328 01:33:35.526617    6044 command_runner.go:130] > Mar 28 01:32:20 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:20.445565592Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0328 01:33:35.526617    6044 command_runner.go:130] > Mar 28 01:32:20 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:20.461244632Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0328 01:33:35.526617    6044 command_runner.go:130] > Mar 28 01:32:20 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:20.465433642Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0328 01:33:35.526617    6044 command_runner.go:130] > Mar 28 01:32:20 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:20.501034531Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0328 01:33:35.526617    6044 command_runner.go:130] > Mar 28 01:32:20 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:20.501100632Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0328 01:33:35.526617    6044 command_runner.go:130] > Mar 28 01:32:20 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:20.501129332Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0328 01:33:35.526617    6044 command_runner.go:130] > Mar 28 01:32:20 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:20.501289432Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0328 01:33:35.526617    6044 command_runner.go:130] > Mar 28 01:32:20 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:20.552329460Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0328 01:33:35.526617    6044 command_runner.go:130] > Mar 28 01:32:20 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:20.552525461Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0328 01:33:35.526617    6044 command_runner.go:130] > Mar 28 01:32:20 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:20.552550661Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0328 01:33:35.526617    6044 command_runner.go:130] > Mar 28 01:32:20 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:20.553090962Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0328 01:33:35.527147    6044 command_runner.go:130] > Mar 28 01:32:20 multinode-240000 cri-dockerd[1277]: time="2024-03-28T01:32:20Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/dfd01cb54b7d89aef97b057d7578bb34d4f58b0e2c9aacddeeff9fbb19db3cb6/resolv.conf as [nameserver 172.28.224.1]"
	I0328 01:33:35.527147    6044 command_runner.go:130] > Mar 28 01:32:20 multinode-240000 cri-dockerd[1277]: time="2024-03-28T01:32:20Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/821d3cf9ae1a9ffce2f350e9ee239e00fd8743eb338fae8a5b39734fc9cabf5e/resolv.conf as [nameserver 172.28.224.1]"
	I0328 01:33:35.527147    6044 command_runner.go:130] > Mar 28 01:32:21 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:21.129523609Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0328 01:33:35.527147    6044 command_runner.go:130] > Mar 28 01:32:21 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:21.129601909Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0328 01:33:35.527252    6044 command_runner.go:130] > Mar 28 01:32:21 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:21.129619209Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0328 01:33:35.527279    6044 command_runner.go:130] > Mar 28 01:32:21 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:21.129777210Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0328 01:33:35.527279    6044 command_runner.go:130] > Mar 28 01:32:21 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:21.142530242Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0328 01:33:35.527279    6044 command_runner.go:130] > Mar 28 01:32:21 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:21.142656442Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0328 01:33:35.527279    6044 command_runner.go:130] > Mar 28 01:32:21 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:21.142692242Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0328 01:33:35.527279    6044 command_runner.go:130] > Mar 28 01:32:21 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:21.143468544Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0328 01:33:35.527279    6044 command_runner.go:130] > Mar 28 01:32:21 multinode-240000 cri-dockerd[1277]: time="2024-03-28T01:32:21Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/347f7ad7ebaed8796c8b12cf936e661c605c1c7a9dc02ccb15b4c682a96c1058/resolv.conf as [nameserver 172.28.224.1]"
	I0328 01:33:35.527279    6044 command_runner.go:130] > Mar 28 01:32:21 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:21.510503865Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0328 01:33:35.527279    6044 command_runner.go:130] > Mar 28 01:32:21 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:21.512149169Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0328 01:33:35.527279    6044 command_runner.go:130] > Mar 28 01:32:21 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:21.515162977Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0328 01:33:35.527279    6044 command_runner.go:130] > Mar 28 01:32:21 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:21.515941979Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0328 01:33:35.527279    6044 command_runner.go:130] > Mar 28 01:32:51 multinode-240000 dockerd[1051]: time="2024-03-28T01:32:51.802252517Z" level=info msg="ignoring event" container=4dcf03394ea80c2d0427ea263be7c533ab3aaf86ba89e254db95cbdd1b70a343 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0328 01:33:35.527279    6044 command_runner.go:130] > Mar 28 01:32:51 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:51.804266497Z" level=info msg="shim disconnected" id=4dcf03394ea80c2d0427ea263be7c533ab3aaf86ba89e254db95cbdd1b70a343 namespace=moby
	I0328 01:33:35.527279    6044 command_runner.go:130] > Mar 28 01:32:51 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:51.805357585Z" level=warning msg="cleaning up after shim disconnected" id=4dcf03394ea80c2d0427ea263be7c533ab3aaf86ba89e254db95cbdd1b70a343 namespace=moby
	I0328 01:33:35.527279    6044 command_runner.go:130] > Mar 28 01:32:51 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:51.805496484Z" level=info msg="cleaning up dead shim" namespace=moby
	I0328 01:33:35.527279    6044 command_runner.go:130] > Mar 28 01:33:05 multinode-240000 dockerd[1057]: time="2024-03-28T01:33:05.040212718Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0328 01:33:35.527279    6044 command_runner.go:130] > Mar 28 01:33:05 multinode-240000 dockerd[1057]: time="2024-03-28T01:33:05.040328718Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0328 01:33:35.527279    6044 command_runner.go:130] > Mar 28 01:33:05 multinode-240000 dockerd[1057]: time="2024-03-28T01:33:05.041880913Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0328 01:33:35.527279    6044 command_runner.go:130] > Mar 28 01:33:05 multinode-240000 dockerd[1057]: time="2024-03-28T01:33:05.044028408Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0328 01:33:35.527279    6044 command_runner.go:130] > Mar 28 01:33:24 multinode-240000 dockerd[1057]: time="2024-03-28T01:33:24.067078014Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0328 01:33:35.527279    6044 command_runner.go:130] > Mar 28 01:33:24 multinode-240000 dockerd[1057]: time="2024-03-28T01:33:24.067134214Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0328 01:33:35.527279    6044 command_runner.go:130] > Mar 28 01:33:24 multinode-240000 dockerd[1057]: time="2024-03-28T01:33:24.067145514Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0328 01:33:35.527810    6044 command_runner.go:130] > Mar 28 01:33:24 multinode-240000 dockerd[1057]: time="2024-03-28T01:33:24.067230414Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0328 01:33:35.527810    6044 command_runner.go:130] > Mar 28 01:33:24 multinode-240000 dockerd[1057]: time="2024-03-28T01:33:24.074234221Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0328 01:33:35.527810    6044 command_runner.go:130] > Mar 28 01:33:24 multinode-240000 dockerd[1057]: time="2024-03-28T01:33:24.074356921Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0328 01:33:35.527810    6044 command_runner.go:130] > Mar 28 01:33:24 multinode-240000 dockerd[1057]: time="2024-03-28T01:33:24.074428021Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0328 01:33:35.527810    6044 command_runner.go:130] > Mar 28 01:33:24 multinode-240000 dockerd[1057]: time="2024-03-28T01:33:24.074678322Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0328 01:33:35.527810    6044 command_runner.go:130] > Mar 28 01:33:24 multinode-240000 cri-dockerd[1277]: time="2024-03-28T01:33:24Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/d3a9caca4652153f4a871cbd85e3780df506a9ae46da758a86025933fbaed683/resolv.conf as [nameserver 172.28.224.1]"
	I0328 01:33:35.527810    6044 command_runner.go:130] > Mar 28 01:33:24 multinode-240000 cri-dockerd[1277]: time="2024-03-28T01:33:24Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/57a41fbc578d50e83f1d23eab9fdc7d77f76594eb2d17300827b52b00008af13/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	I0328 01:33:35.527960    6044 command_runner.go:130] > Mar 28 01:33:24 multinode-240000 dockerd[1057]: time="2024-03-28T01:33:24.642121747Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0328 01:33:35.528002    6044 command_runner.go:130] > Mar 28 01:33:24 multinode-240000 dockerd[1057]: time="2024-03-28T01:33:24.644702250Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0328 01:33:35.528058    6044 command_runner.go:130] > Mar 28 01:33:24 multinode-240000 dockerd[1057]: time="2024-03-28T01:33:24.644921750Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0328 01:33:35.528058    6044 command_runner.go:130] > Mar 28 01:33:24 multinode-240000 dockerd[1057]: time="2024-03-28T01:33:24.645074450Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0328 01:33:35.528111    6044 command_runner.go:130] > Mar 28 01:33:24 multinode-240000 dockerd[1057]: time="2024-03-28T01:33:24.675693486Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0328 01:33:35.528111    6044 command_runner.go:130] > Mar 28 01:33:24 multinode-240000 dockerd[1057]: time="2024-03-28T01:33:24.675868286Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0328 01:33:35.528168    6044 command_runner.go:130] > Mar 28 01:33:24 multinode-240000 dockerd[1057]: time="2024-03-28T01:33:24.675939787Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0328 01:33:35.528221    6044 command_runner.go:130] > Mar 28 01:33:24 multinode-240000 dockerd[1057]: time="2024-03-28T01:33:24.676054087Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0328 01:33:35.528221    6044 command_runner.go:130] > Mar 28 01:33:27 multinode-240000 dockerd[1051]: 2024/03/28 01:33:27 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0328 01:33:35.528221    6044 command_runner.go:130] > Mar 28 01:33:27 multinode-240000 dockerd[1051]: 2024/03/28 01:33:27 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0328 01:33:35.528276    6044 command_runner.go:130] > Mar 28 01:33:27 multinode-240000 dockerd[1051]: 2024/03/28 01:33:27 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0328 01:33:35.528276    6044 command_runner.go:130] > Mar 28 01:33:27 multinode-240000 dockerd[1051]: 2024/03/28 01:33:27 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0328 01:33:35.528276    6044 command_runner.go:130] > Mar 28 01:33:27 multinode-240000 dockerd[1051]: 2024/03/28 01:33:27 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0328 01:33:35.528276    6044 command_runner.go:130] > Mar 28 01:33:28 multinode-240000 dockerd[1051]: 2024/03/28 01:33:28 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0328 01:33:35.528276    6044 command_runner.go:130] > Mar 28 01:33:28 multinode-240000 dockerd[1051]: 2024/03/28 01:33:28 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0328 01:33:35.528276    6044 command_runner.go:130] > Mar 28 01:33:28 multinode-240000 dockerd[1051]: 2024/03/28 01:33:28 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0328 01:33:35.528433    6044 command_runner.go:130] > Mar 28 01:33:28 multinode-240000 dockerd[1051]: 2024/03/28 01:33:28 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0328 01:33:35.528494    6044 command_runner.go:130] > Mar 28 01:33:28 multinode-240000 dockerd[1051]: 2024/03/28 01:33:28 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0328 01:33:35.528494    6044 command_runner.go:130] > Mar 28 01:33:28 multinode-240000 dockerd[1051]: 2024/03/28 01:33:28 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0328 01:33:35.528550    6044 command_runner.go:130] > Mar 28 01:33:28 multinode-240000 dockerd[1051]: 2024/03/28 01:33:28 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0328 01:33:35.528608    6044 command_runner.go:130] > Mar 28 01:33:31 multinode-240000 dockerd[1051]: 2024/03/28 01:33:31 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0328 01:33:35.528608    6044 command_runner.go:130] > Mar 28 01:33:31 multinode-240000 dockerd[1051]: 2024/03/28 01:33:31 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0328 01:33:35.528664    6044 command_runner.go:130] > Mar 28 01:33:31 multinode-240000 dockerd[1051]: 2024/03/28 01:33:31 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0328 01:33:35.528664    6044 command_runner.go:130] > Mar 28 01:33:31 multinode-240000 dockerd[1051]: 2024/03/28 01:33:31 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0328 01:33:35.528722    6044 command_runner.go:130] > Mar 28 01:33:31 multinode-240000 dockerd[1051]: 2024/03/28 01:33:31 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0328 01:33:35.528722    6044 command_runner.go:130] > Mar 28 01:33:31 multinode-240000 dockerd[1051]: 2024/03/28 01:33:31 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0328 01:33:35.528771    6044 command_runner.go:130] > Mar 28 01:33:31 multinode-240000 dockerd[1051]: 2024/03/28 01:33:31 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0328 01:33:35.528771    6044 command_runner.go:130] > Mar 28 01:33:31 multinode-240000 dockerd[1051]: 2024/03/28 01:33:31 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0328 01:33:35.528771    6044 command_runner.go:130] > Mar 28 01:33:32 multinode-240000 dockerd[1051]: 2024/03/28 01:33:32 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0328 01:33:35.528771    6044 command_runner.go:130] > Mar 28 01:33:32 multinode-240000 dockerd[1051]: 2024/03/28 01:33:32 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0328 01:33:35.528771    6044 command_runner.go:130] > Mar 28 01:33:32 multinode-240000 dockerd[1051]: 2024/03/28 01:33:32 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0328 01:33:35.528771    6044 command_runner.go:130] > Mar 28 01:33:32 multinode-240000 dockerd[1051]: 2024/03/28 01:33:32 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0328 01:33:35.528771    6044 command_runner.go:130] > Mar 28 01:33:35 multinode-240000 dockerd[1051]: 2024/03/28 01:33:35 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0328 01:33:35.528771    6044 command_runner.go:130] > Mar 28 01:33:35 multinode-240000 dockerd[1051]: 2024/03/28 01:33:35 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0328 01:33:35.562570    6044 logs.go:123] Gathering logs for kube-proxy [7c9638784c60] ...
	I0328 01:33:35.562570    6044 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c9638784c60"
	I0328 01:33:35.591316    6044 command_runner.go:130] ! I0328 01:32:22.346613       1 server_others.go:72] "Using iptables proxy"
	I0328 01:33:35.591316    6044 command_runner.go:130] ! I0328 01:32:22.432600       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["172.28.229.19"]
	I0328 01:33:35.591316    6044 command_runner.go:130] ! I0328 01:32:22.670309       1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
	I0328 01:33:35.591316    6044 command_runner.go:130] ! I0328 01:32:22.670342       1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0328 01:33:35.591793    6044 command_runner.go:130] ! I0328 01:32:22.670422       1 server_others.go:168] "Using iptables Proxier"
	I0328 01:33:35.591793    6044 command_runner.go:130] ! I0328 01:32:22.691003       1 proxier.go:245] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0328 01:33:35.591843    6044 command_runner.go:130] ! I0328 01:32:22.691955       1 server.go:865] "Version info" version="v1.29.3"
	I0328 01:33:35.591843    6044 command_runner.go:130] ! I0328 01:32:22.691998       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0328 01:33:35.591843    6044 command_runner.go:130] ! I0328 01:32:22.703546       1 config.go:188] "Starting service config controller"
	I0328 01:33:35.591919    6044 command_runner.go:130] ! I0328 01:32:22.706995       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0328 01:33:35.591919    6044 command_runner.go:130] ! I0328 01:32:22.707357       1 config.go:97] "Starting endpoint slice config controller"
	I0328 01:33:35.591919    6044 command_runner.go:130] ! I0328 01:32:22.707370       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0328 01:33:35.591919    6044 command_runner.go:130] ! I0328 01:32:22.708174       1 config.go:315] "Starting node config controller"
	I0328 01:33:35.592003    6044 command_runner.go:130] ! I0328 01:32:22.708184       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0328 01:33:35.592003    6044 command_runner.go:130] ! I0328 01:32:22.807593       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0328 01:33:35.592003    6044 command_runner.go:130] ! I0328 01:32:22.807699       1 shared_informer.go:318] Caches are synced for service config
	I0328 01:33:35.592003    6044 command_runner.go:130] ! I0328 01:32:22.808439       1 shared_informer.go:318] Caches are synced for node config
	I0328 01:33:35.594493    6044 logs.go:123] Gathering logs for kube-proxy [bb0b3c542264] ...
	I0328 01:33:35.594565    6044 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb0b3c542264"
	I0328 01:33:35.625075    6044 command_runner.go:130] ! I0328 01:07:46.260052       1 server_others.go:72] "Using iptables proxy"
	I0328 01:33:35.625075    6044 command_runner.go:130] ! I0328 01:07:46.279785       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["172.28.227.122"]
	I0328 01:33:35.625075    6044 command_runner.go:130] ! I0328 01:07:46.364307       1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
	I0328 01:33:35.626019    6044 command_runner.go:130] ! I0328 01:07:46.364414       1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0328 01:33:35.626019    6044 command_runner.go:130] ! I0328 01:07:46.364433       1 server_others.go:168] "Using iptables Proxier"
	I0328 01:33:35.626019    6044 command_runner.go:130] ! I0328 01:07:46.368524       1 proxier.go:245] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0328 01:33:35.626019    6044 command_runner.go:130] ! I0328 01:07:46.368854       1 server.go:865] "Version info" version="v1.29.3"
	I0328 01:33:35.626019    6044 command_runner.go:130] ! I0328 01:07:46.368909       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0328 01:33:35.626019    6044 command_runner.go:130] ! I0328 01:07:46.370904       1 config.go:188] "Starting service config controller"
	I0328 01:33:35.626119    6044 command_runner.go:130] ! I0328 01:07:46.382389       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0328 01:33:35.626119    6044 command_runner.go:130] ! I0328 01:07:46.382488       1 shared_informer.go:318] Caches are synced for service config
	I0328 01:33:35.626119    6044 command_runner.go:130] ! I0328 01:07:46.371910       1 config.go:97] "Starting endpoint slice config controller"
	I0328 01:33:35.626119    6044 command_runner.go:130] ! I0328 01:07:46.382665       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0328 01:33:35.626119    6044 command_runner.go:130] ! I0328 01:07:46.382693       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0328 01:33:35.626119    6044 command_runner.go:130] ! I0328 01:07:46.374155       1 config.go:315] "Starting node config controller"
	I0328 01:33:35.626212    6044 command_runner.go:130] ! I0328 01:07:46.382861       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0328 01:33:35.626212    6044 command_runner.go:130] ! I0328 01:07:46.382887       1 shared_informer.go:318] Caches are synced for node config
	I0328 01:33:35.627181    6044 logs.go:123] Gathering logs for kube-controller-manager [ceaccf323dee] ...
	I0328 01:33:35.627181    6044 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ceaccf323dee"
	I0328 01:33:35.660540    6044 command_runner.go:130] ! I0328 01:32:17.221400       1 serving.go:380] Generated self-signed cert in-memory
	I0328 01:33:35.660540    6044 command_runner.go:130] ! I0328 01:32:17.938996       1 controllermanager.go:187] "Starting" version="v1.29.3"
	I0328 01:33:35.660540    6044 command_runner.go:130] ! I0328 01:32:17.939043       1 controllermanager.go:189] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0328 01:33:35.661547    6044 command_runner.go:130] ! I0328 01:32:17.943203       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0328 01:33:35.661665    6044 command_runner.go:130] ! I0328 01:32:17.943369       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0328 01:33:35.661665    6044 command_runner.go:130] ! I0328 01:32:17.944549       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0328 01:33:35.661665    6044 command_runner.go:130] ! I0328 01:32:17.944700       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0328 01:33:35.661665    6044 command_runner.go:130] ! I0328 01:32:21.401842       1 controllermanager.go:735] "Started controller" controller="serviceaccount-token-controller"
	I0328 01:33:35.661665    6044 command_runner.go:130] ! I0328 01:32:21.405585       1 shared_informer.go:311] Waiting for caches to sync for tokens
	I0328 01:33:35.661741    6044 command_runner.go:130] ! I0328 01:32:21.409924       1 controllermanager.go:735] "Started controller" controller="cronjob-controller"
	I0328 01:33:35.661818    6044 command_runner.go:130] ! I0328 01:32:21.410592       1 cronjob_controllerv2.go:139] "Starting cronjob controller v2"
	I0328 01:33:35.661818    6044 command_runner.go:130] ! I0328 01:32:21.410608       1 shared_informer.go:311] Waiting for caches to sync for cronjob
	I0328 01:33:35.661818    6044 command_runner.go:130] ! I0328 01:32:21.415437       1 controllermanager.go:735] "Started controller" controller="certificatesigningrequest-cleaner-controller"
	I0328 01:33:35.661818    6044 command_runner.go:130] ! I0328 01:32:21.415588       1 cleaner.go:83] "Starting CSR cleaner controller"
	I0328 01:33:35.661818    6044 command_runner.go:130] ! I0328 01:32:21.423473       1 controllermanager.go:735] "Started controller" controller="pod-garbage-collector-controller"
	I0328 01:33:35.661818    6044 command_runner.go:130] ! I0328 01:32:21.424183       1 gc_controller.go:101] "Starting GC controller"
	I0328 01:33:35.661818    6044 command_runner.go:130] ! I0328 01:32:21.424205       1 shared_informer.go:311] Waiting for caches to sync for GC
	I0328 01:33:35.661818    6044 command_runner.go:130] ! I0328 01:32:21.428774       1 controllermanager.go:735] "Started controller" controller="replicaset-controller"
	I0328 01:33:35.662914    6044 command_runner.go:130] ! I0328 01:32:21.429480       1 replica_set.go:214] "Starting controller" name="replicaset"
	I0328 01:33:35.662978    6044 command_runner.go:130] ! I0328 01:32:21.429495       1 shared_informer.go:311] Waiting for caches to sync for ReplicaSet
	I0328 01:33:35.663009    6044 command_runner.go:130] ! I0328 01:32:21.434934       1 controllermanager.go:735] "Started controller" controller="token-cleaner-controller"
	I0328 01:33:35.663009    6044 command_runner.go:130] ! I0328 01:32:21.435336       1 tokencleaner.go:112] "Starting token cleaner controller"
	I0328 01:33:35.663112    6044 command_runner.go:130] ! I0328 01:32:21.440600       1 shared_informer.go:311] Waiting for caches to sync for token_cleaner
	I0328 01:33:35.663112    6044 command_runner.go:130] ! I0328 01:32:21.440609       1 shared_informer.go:318] Caches are synced for token_cleaner
	I0328 01:33:35.663112    6044 command_runner.go:130] ! I0328 01:32:21.447308       1 controllermanager.go:735] "Started controller" controller="persistentvolume-binder-controller"
	I0328 01:33:35.663264    6044 command_runner.go:130] ! I0328 01:32:21.450160       1 pv_controller_base.go:319] "Starting persistent volume controller"
	I0328 01:33:35.663264    6044 command_runner.go:130] ! I0328 01:32:21.450574       1 shared_informer.go:311] Waiting for caches to sync for persistent volume
	I0328 01:33:35.663264    6044 command_runner.go:130] ! I0328 01:32:21.459890       1 controllermanager.go:735] "Started controller" controller="taint-eviction-controller"
	I0328 01:33:35.663361    6044 command_runner.go:130] ! I0328 01:32:21.463892       1 taint_eviction.go:285] "Starting" controller="taint-eviction-controller"
	I0328 01:33:35.663361    6044 command_runner.go:130] ! I0328 01:32:21.464792       1 taint_eviction.go:291] "Sending events to api server"
	I0328 01:33:35.663459    6044 command_runner.go:130] ! I0328 01:32:21.465478       1 shared_informer.go:311] Waiting for caches to sync for taint-eviction-controller
	I0328 01:33:35.663459    6044 command_runner.go:130] ! I0328 01:32:21.467842       1 controllermanager.go:735] "Started controller" controller="endpoints-controller"
	I0328 01:33:35.663459    6044 command_runner.go:130] ! I0328 01:32:21.471786       1 endpoints_controller.go:174] "Starting endpoint controller"
	I0328 01:33:35.663459    6044 command_runner.go:130] ! I0328 01:32:21.472200       1 shared_informer.go:311] Waiting for caches to sync for endpoint
	I0328 01:33:35.663597    6044 command_runner.go:130] ! I0328 01:32:21.482388       1 controllermanager.go:735] "Started controller" controller="endpointslice-mirroring-controller"
	I0328 01:33:35.663597    6044 command_runner.go:130] ! I0328 01:32:21.482635       1 endpointslicemirroring_controller.go:223] "Starting EndpointSliceMirroring controller"
	I0328 01:33:35.663597    6044 command_runner.go:130] ! I0328 01:32:21.482650       1 shared_informer.go:311] Waiting for caches to sync for endpoint_slice_mirroring
	I0328 01:33:35.663597    6044 command_runner.go:130] ! I0328 01:32:21.506106       1 shared_informer.go:318] Caches are synced for tokens
	I0328 01:33:35.663749    6044 command_runner.go:130] ! I0328 01:32:21.543460       1 controllermanager.go:735] "Started controller" controller="namespace-controller"
	I0328 01:33:35.663786    6044 command_runner.go:130] ! I0328 01:32:21.543999       1 namespace_controller.go:197] "Starting namespace controller"
	I0328 01:33:35.663786    6044 command_runner.go:130] ! I0328 01:32:21.544021       1 shared_informer.go:311] Waiting for caches to sync for namespace
	I0328 01:33:35.663786    6044 command_runner.go:130] ! I0328 01:32:21.554383       1 controllermanager.go:735] "Started controller" controller="serviceaccount-controller"
	I0328 01:33:35.663786    6044 command_runner.go:130] ! I0328 01:32:21.555541       1 serviceaccounts_controller.go:111] "Starting service account controller"
	I0328 01:33:35.663946    6044 command_runner.go:130] ! I0328 01:32:21.555562       1 shared_informer.go:311] Waiting for caches to sync for service account
	I0328 01:33:35.663946    6044 command_runner.go:130] ! I0328 01:32:21.587795       1 garbagecollector.go:155] "Starting controller" controller="garbagecollector"
	I0328 01:33:35.663946    6044 command_runner.go:130] ! I0328 01:32:21.587823       1 shared_informer.go:311] Waiting for caches to sync for garbage collector
	I0328 01:33:35.663946    6044 command_runner.go:130] ! I0328 01:32:21.587848       1 graph_builder.go:302] "Running" component="GraphBuilder"
	I0328 01:33:35.664037    6044 command_runner.go:130] ! I0328 01:32:21.592263       1 controllermanager.go:735] "Started controller" controller="garbage-collector-controller"
	I0328 01:33:35.664037    6044 command_runner.go:130] ! E0328 01:32:21.607017       1 core.go:270] "Failed to start cloud node lifecycle controller" err="no cloud provider provided"
	I0328 01:33:35.664037    6044 command_runner.go:130] ! I0328 01:32:21.607046       1 controllermanager.go:713] "Warning: skipping controller" controller="cloud-node-lifecycle-controller"
	I0328 01:33:35.664037    6044 command_runner.go:130] ! I0328 01:32:21.629420       1 controllermanager.go:735] "Started controller" controller="persistentvolume-expander-controller"
	I0328 01:33:35.664037    6044 command_runner.go:130] ! I0328 01:32:21.629868       1 expand_controller.go:328] "Starting expand controller"
	I0328 01:33:35.664037    6044 command_runner.go:130] ! I0328 01:32:21.633210       1 shared_informer.go:311] Waiting for caches to sync for expand
	I0328 01:33:35.664037    6044 command_runner.go:130] ! I0328 01:32:21.640307       1 controllermanager.go:735] "Started controller" controller="endpointslice-controller"
	I0328 01:33:35.664037    6044 command_runner.go:130] ! I0328 01:32:21.640871       1 endpointslice_controller.go:264] "Starting endpoint slice controller"
	I0328 01:33:35.664037    6044 command_runner.go:130] ! I0328 01:32:21.641527       1 shared_informer.go:311] Waiting for caches to sync for endpoint_slice
	I0328 01:33:35.664037    6044 command_runner.go:130] ! I0328 01:32:21.649017       1 controllermanager.go:735] "Started controller" controller="replicationcontroller-controller"
	I0328 01:33:35.664037    6044 command_runner.go:130] ! I0328 01:32:21.649755       1 replica_set.go:214] "Starting controller" name="replicationcontroller"
	I0328 01:33:35.664037    6044 command_runner.go:130] ! I0328 01:32:21.649783       1 shared_informer.go:311] Waiting for caches to sync for ReplicationController
	I0328 01:33:35.664037    6044 command_runner.go:130] ! I0328 01:32:21.663585       1 controllermanager.go:735] "Started controller" controller="deployment-controller"
	I0328 01:33:35.664037    6044 command_runner.go:130] ! I0328 01:32:21.666026       1 deployment_controller.go:168] "Starting controller" controller="deployment"
	I0328 01:33:35.664037    6044 command_runner.go:130] ! I0328 01:32:21.666316       1 shared_informer.go:311] Waiting for caches to sync for deployment
	I0328 01:33:35.664037    6044 command_runner.go:130] ! I0328 01:32:21.701619       1 controllermanager.go:735] "Started controller" controller="disruption-controller"
	I0328 01:33:35.664037    6044 command_runner.go:130] ! I0328 01:32:21.705210       1 disruption.go:433] "Sending events to api server."
	I0328 01:33:35.664037    6044 command_runner.go:130] ! I0328 01:32:21.705303       1 disruption.go:444] "Starting disruption controller"
	I0328 01:33:35.664037    6044 command_runner.go:130] ! I0328 01:32:21.705318       1 shared_informer.go:311] Waiting for caches to sync for disruption
	I0328 01:33:35.664037    6044 command_runner.go:130] ! I0328 01:32:21.710857       1 controllermanager.go:735] "Started controller" controller="statefulset-controller"
	I0328 01:33:35.664037    6044 command_runner.go:130] ! I0328 01:32:21.711002       1 stateful_set.go:161] "Starting stateful set controller"
	I0328 01:33:35.664037    6044 command_runner.go:130] ! I0328 01:32:21.711016       1 shared_informer.go:311] Waiting for caches to sync for stateful set
	I0328 01:33:35.664037    6044 command_runner.go:130] ! I0328 01:32:21.722757       1 certificate_controller.go:115] "Starting certificate controller" name="csrsigning-kubelet-serving"
	I0328 01:33:35.664037    6044 command_runner.go:130] ! I0328 01:32:21.723222       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-kubelet-serving
	I0328 01:33:35.664037    6044 command_runner.go:130] ! I0328 01:32:21.723310       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0328 01:33:35.664037    6044 command_runner.go:130] ! I0328 01:32:21.725677       1 certificate_controller.go:115] "Starting certificate controller" name="csrsigning-kubelet-client"
	I0328 01:33:35.664037    6044 command_runner.go:130] ! I0328 01:32:21.725696       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-kubelet-client
	I0328 01:33:35.664037    6044 command_runner.go:130] ! I0328 01:32:21.725759       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0328 01:33:35.664037    6044 command_runner.go:130] ! I0328 01:32:21.726507       1 certificate_controller.go:115] "Starting certificate controller" name="csrsigning-kube-apiserver-client"
	I0328 01:33:35.664037    6044 command_runner.go:130] ! I0328 01:32:21.726521       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client
	I0328 01:33:35.664037    6044 command_runner.go:130] ! I0328 01:32:21.726539       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0328 01:33:35.664037    6044 command_runner.go:130] ! I0328 01:32:21.751095       1 certificate_controller.go:115] "Starting certificate controller" name="csrsigning-legacy-unknown"
	I0328 01:33:35.664037    6044 command_runner.go:130] ! I0328 01:32:21.751136       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
	I0328 01:33:35.664037    6044 command_runner.go:130] ! I0328 01:32:21.751164       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0328 01:33:35.664037    6044 command_runner.go:130] ! I0328 01:32:21.751048       1 controllermanager.go:735] "Started controller" controller="certificatesigningrequest-signing-controller"
	I0328 01:33:35.664586    6044 command_runner.go:130] ! E0328 01:32:21.760877       1 core.go:105] "Failed to start service controller" err="WARNING: no cloud provider provided, services of type LoadBalancer will fail"
	I0328 01:33:35.664586    6044 command_runner.go:130] ! I0328 01:32:21.761111       1 controllermanager.go:713] "Warning: skipping controller" controller="service-lb-controller"
	I0328 01:33:35.664586    6044 command_runner.go:130] ! I0328 01:32:21.770248       1 controllermanager.go:735] "Started controller" controller="persistentvolumeclaim-protection-controller"
	I0328 01:33:35.664586    6044 command_runner.go:130] ! I0328 01:32:21.771349       1 pvc_protection_controller.go:102] "Starting PVC protection controller"
	I0328 01:33:35.664746    6044 command_runner.go:130] ! I0328 01:32:21.771929       1 shared_informer.go:311] Waiting for caches to sync for PVC protection
	I0328 01:33:35.664746    6044 command_runner.go:130] ! I0328 01:32:21.788256       1 controllermanager.go:735] "Started controller" controller="root-ca-certificate-publisher-controller"
	I0328 01:33:35.664746    6044 command_runner.go:130] ! I0328 01:32:21.788511       1 publisher.go:102] "Starting root CA cert publisher controller"
	I0328 01:33:35.664832    6044 command_runner.go:130] ! I0328 01:32:21.788524       1 shared_informer.go:311] Waiting for caches to sync for crt configmap
	I0328 01:33:35.664893    6044 command_runner.go:130] ! I0328 01:32:21.815523       1 controllermanager.go:735] "Started controller" controller="legacy-serviceaccount-token-cleaner-controller"
	I0328 01:33:35.664893    6044 command_runner.go:130] ! I0328 01:32:21.815692       1 legacy_serviceaccount_token_cleaner.go:103] "Starting legacy service account token cleaner controller"
	I0328 01:33:35.664893    6044 command_runner.go:130] ! I0328 01:32:21.816619       1 shared_informer.go:311] Waiting for caches to sync for legacy-service-account-token-cleaner
	I0328 01:33:35.664989    6044 command_runner.go:130] ! I0328 01:32:21.873573       1 controllermanager.go:735] "Started controller" controller="horizontal-pod-autoscaler-controller"
	I0328 01:33:35.664989    6044 command_runner.go:130] ! I0328 01:32:21.873852       1 controllermanager.go:687] "Controller is disabled by a feature gate" controller="validatingadmissionpolicy-status-controller" requiredFeatureGates=["ValidatingAdmissionPolicy"]
	I0328 01:33:35.664989    6044 command_runner.go:130] ! I0328 01:32:21.873869       1 controllermanager.go:687] "Controller is disabled by a feature gate" controller="service-cidr-controller" requiredFeatureGates=["MultiCIDRServiceAllocator"]
	I0328 01:33:35.664989    6044 command_runner.go:130] ! I0328 01:32:21.873702       1 horizontal.go:200] "Starting HPA controller"
	I0328 01:33:35.665144    6044 command_runner.go:130] ! I0328 01:32:21.874098       1 shared_informer.go:311] Waiting for caches to sync for HPA
	I0328 01:33:35.665144    6044 command_runner.go:130] ! I0328 01:32:21.901041       1 controllermanager.go:735] "Started controller" controller="daemonset-controller"
	I0328 01:33:35.665144    6044 command_runner.go:130] ! I0328 01:32:21.901450       1 daemon_controller.go:291] "Starting daemon sets controller"
	I0328 01:33:35.665144    6044 command_runner.go:130] ! I0328 01:32:21.901466       1 shared_informer.go:311] Waiting for caches to sync for daemon sets
	I0328 01:33:35.665144    6044 command_runner.go:130] ! I0328 01:32:21.907150       1 controllermanager.go:735] "Started controller" controller="ttl-controller"
	I0328 01:33:35.665144    6044 command_runner.go:130] ! I0328 01:32:21.907285       1 ttl_controller.go:124] "Starting TTL controller"
	I0328 01:33:35.665300    6044 command_runner.go:130] ! I0328 01:32:21.907294       1 shared_informer.go:311] Waiting for caches to sync for TTL
	I0328 01:33:35.665395    6044 command_runner.go:130] ! I0328 01:32:21.918008       1 controllermanager.go:735] "Started controller" controller="bootstrap-signer-controller"
	I0328 01:33:35.665453    6044 command_runner.go:130] ! I0328 01:32:21.918049       1 core.go:294] "Warning: configure-cloud-routes is set, but no cloud provider specified. Will not configure cloud provider routes."
	I0328 01:33:35.665453    6044 command_runner.go:130] ! I0328 01:32:21.918077       1 controllermanager.go:713] "Warning: skipping controller" controller="node-route-controller"
	I0328 01:33:35.665453    6044 command_runner.go:130] ! I0328 01:32:21.918277       1 shared_informer.go:311] Waiting for caches to sync for bootstrap_signer
	I0328 01:33:35.665554    6044 command_runner.go:130] ! I0328 01:32:21.926280       1 controllermanager.go:735] "Started controller" controller="ephemeral-volume-controller"
	I0328 01:33:35.665554    6044 command_runner.go:130] ! I0328 01:32:21.926334       1 controllermanager.go:687] "Controller is disabled by a feature gate" controller="storageversion-garbage-collector-controller" requiredFeatureGates=["APIServerIdentity","StorageVersionAPI"]
	I0328 01:33:35.665554    6044 command_runner.go:130] ! I0328 01:32:21.926586       1 controller.go:169] "Starting ephemeral volume controller"
	I0328 01:33:35.665664    6044 command_runner.go:130] ! I0328 01:32:21.926965       1 shared_informer.go:311] Waiting for caches to sync for ephemeral
	I0328 01:33:35.665664    6044 command_runner.go:130] ! I0328 01:32:22.081182       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="ingresses.networking.k8s.io"
	I0328 01:33:35.665664    6044 command_runner.go:130] ! I0328 01:32:22.083797       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="networkpolicies.networking.k8s.io"
	I0328 01:33:35.665764    6044 command_runner.go:130] ! I0328 01:32:22.084146       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="roles.rbac.authorization.k8s.io"
	I0328 01:33:35.665764    6044 command_runner.go:130] ! I0328 01:32:22.084540       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="csistoragecapacities.storage.k8s.io"
	I0328 01:33:35.665851    6044 command_runner.go:130] ! W0328 01:32:22.084798       1 shared_informer.go:591] resyncPeriod 19h39m22.96948195s is smaller than resyncCheckPeriod 22h4m29.884091788s and the informer has already started. Changing it to 22h4m29.884091788s
	I0328 01:33:35.665851    6044 command_runner.go:130] ! I0328 01:32:22.085208       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="serviceaccounts"
	I0328 01:33:35.665851    6044 command_runner.go:130] ! I0328 01:32:22.085543       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="horizontalpodautoscalers.autoscaling"
	I0328 01:33:35.665964    6044 command_runner.go:130] ! I0328 01:32:22.085825       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="statefulsets.apps"
	I0328 01:33:35.665964    6044 command_runner.go:130] ! I0328 01:32:22.086183       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="controllerrevisions.apps"
	I0328 01:33:35.666077    6044 command_runner.go:130] ! I0328 01:32:22.086894       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="podtemplates"
	I0328 01:33:35.666077    6044 command_runner.go:130] ! I0328 01:32:22.087069       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="daemonsets.apps"
	I0328 01:33:35.666077    6044 command_runner.go:130] ! I0328 01:32:22.087521       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="leases.coordination.k8s.io"
	I0328 01:33:35.666192    6044 command_runner.go:130] ! I0328 01:32:22.087567       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="endpoints"
	I0328 01:33:35.666247    6044 command_runner.go:130] ! W0328 01:32:22.087624       1 shared_informer.go:591] resyncPeriod 12h6m23.941100832s is smaller than resyncCheckPeriod 22h4m29.884091788s and the informer has already started. Changing it to 22h4m29.884091788s
	I0328 01:33:35.666310    6044 command_runner.go:130] ! I0328 01:32:22.087903       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="deployments.apps"
	I0328 01:33:35.666355    6044 command_runner.go:130] ! I0328 01:32:22.088034       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="rolebindings.rbac.authorization.k8s.io"
	I0328 01:33:35.666411    6044 command_runner.go:130] ! I0328 01:32:22.088275       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="endpointslices.discovery.k8s.io"
	I0328 01:33:35.666411    6044 command_runner.go:130] ! I0328 01:32:22.088741       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="replicasets.apps"
	I0328 01:33:35.666411    6044 command_runner.go:130] ! I0328 01:32:22.089011       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="poddisruptionbudgets.policy"
	I0328 01:33:35.666526    6044 command_runner.go:130] ! I0328 01:32:22.104096       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="cronjobs.batch"
	I0328 01:33:35.666526    6044 command_runner.go:130] ! I0328 01:32:22.124297       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="jobs.batch"
	I0328 01:33:35.666666    6044 command_runner.go:130] ! I0328 01:32:22.131348       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="limitranges"
	I0328 01:33:35.666666    6044 command_runner.go:130] ! I0328 01:32:22.132084       1 controllermanager.go:735] "Started controller" controller="resourcequota-controller"
	I0328 01:33:35.666666    6044 command_runner.go:130] ! I0328 01:32:22.132998       1 resource_quota_controller.go:294] "Starting resource quota controller"
	I0328 01:33:35.666781    6044 command_runner.go:130] ! I0328 01:32:22.133345       1 shared_informer.go:311] Waiting for caches to sync for resource quota
	I0328 01:33:35.666781    6044 command_runner.go:130] ! I0328 01:32:22.134354       1 resource_quota_monitor.go:305] "QuotaMonitor running"
	I0328 01:33:35.666781    6044 command_runner.go:130] ! I0328 01:32:22.146807       1 controllermanager.go:735] "Started controller" controller="job-controller"
	I0328 01:33:35.666912    6044 command_runner.go:130] ! I0328 01:32:22.147286       1 job_controller.go:224] "Starting job controller"
	I0328 01:33:35.666912    6044 command_runner.go:130] ! I0328 01:32:22.147508       1 shared_informer.go:311] Waiting for caches to sync for job
	I0328 01:33:35.666912    6044 command_runner.go:130] ! I0328 01:32:22.165018       1 node_lifecycle_controller.go:425] "Controller will reconcile labels"
	I0328 01:33:35.667037    6044 command_runner.go:130] ! I0328 01:32:22.165501       1 controllermanager.go:735] "Started controller" controller="node-lifecycle-controller"
	I0328 01:33:35.667037    6044 command_runner.go:130] ! I0328 01:32:22.165846       1 node_lifecycle_controller.go:459] "Sending events to api server"
	I0328 01:33:35.667098    6044 command_runner.go:130] ! I0328 01:32:22.166330       1 node_lifecycle_controller.go:470] "Starting node controller"
	I0328 01:33:35.667098    6044 command_runner.go:130] ! I0328 01:32:22.167894       1 shared_informer.go:311] Waiting for caches to sync for taint
	I0328 01:33:35.667152    6044 command_runner.go:130] ! I0328 01:32:22.212429       1 controllermanager.go:735] "Started controller" controller="clusterrole-aggregation-controller"
	I0328 01:33:35.667199    6044 command_runner.go:130] ! I0328 01:32:22.212522       1 clusterroleaggregation_controller.go:189] "Starting ClusterRoleAggregator controller"
	I0328 01:33:35.667234    6044 command_runner.go:130] ! I0328 01:32:22.212533       1 shared_informer.go:311] Waiting for caches to sync for ClusterRoleAggregator
	I0328 01:33:35.667276    6044 command_runner.go:130] ! I0328 01:32:22.258526       1 controllermanager.go:735] "Started controller" controller="persistentvolume-protection-controller"
	I0328 01:33:35.667330    6044 command_runner.go:130] ! I0328 01:32:22.258865       1 pv_protection_controller.go:78] "Starting PV protection controller"
	I0328 01:33:35.667330    6044 command_runner.go:130] ! I0328 01:32:22.258907       1 shared_informer.go:311] Waiting for caches to sync for PV protection
	I0328 01:33:35.667384    6044 command_runner.go:130] ! I0328 01:32:22.324062       1 controllermanager.go:735] "Started controller" controller="ttl-after-finished-controller"
	I0328 01:33:35.667448    6044 command_runner.go:130] ! I0328 01:32:22.324128       1 ttlafterfinished_controller.go:109] "Starting TTL after finished controller"
	I0328 01:33:35.667511    6044 command_runner.go:130] ! I0328 01:32:22.324137       1 shared_informer.go:311] Waiting for caches to sync for TTL after finished
	I0328 01:33:35.667511    6044 command_runner.go:130] ! I0328 01:32:22.358296       1 controllermanager.go:735] "Started controller" controller="certificatesigningrequest-approving-controller"
	I0328 01:33:35.667591    6044 command_runner.go:130] ! I0328 01:32:22.358367       1 certificate_controller.go:115] "Starting certificate controller" name="csrapproving"
	I0328 01:33:35.667591    6044 command_runner.go:130] ! I0328 01:32:22.358377       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrapproving
	I0328 01:33:35.667682    6044 command_runner.go:130] ! I0328 01:32:32.447083       1 range_allocator.go:111] "No Secondary Service CIDR provided. Skipping filtering out secondary service addresses"
	I0328 01:33:35.667682    6044 command_runner.go:130] ! I0328 01:32:32.447529       1 node_ipam_controller.go:160] "Starting ipam controller"
	I0328 01:33:35.667746    6044 command_runner.go:130] ! I0328 01:32:32.447619       1 shared_informer.go:311] Waiting for caches to sync for node
	I0328 01:33:35.667746    6044 command_runner.go:130] ! I0328 01:32:32.447221       1 controllermanager.go:735] "Started controller" controller="node-ipam-controller"
	I0328 01:33:35.668005    6044 command_runner.go:130] ! I0328 01:32:32.451626       1 controllermanager.go:735] "Started controller" controller="persistentvolume-attach-detach-controller"
	I0328 01:33:35.668078    6044 command_runner.go:130] ! I0328 01:32:32.451960       1 controllermanager.go:687] "Controller is disabled by a feature gate" controller="resourceclaim-controller" requiredFeatureGates=["DynamicResourceAllocation"]
	I0328 01:33:35.668078    6044 command_runner.go:130] ! I0328 01:32:32.451695       1 attach_detach_controller.go:337] "Starting attach detach controller"
	I0328 01:33:35.668149    6044 command_runner.go:130] ! I0328 01:32:32.452296       1 shared_informer.go:311] Waiting for caches to sync for attach detach
	I0328 01:33:35.668205    6044 command_runner.go:130] ! I0328 01:32:32.465613       1 shared_informer.go:311] Waiting for caches to sync for resource quota
	I0328 01:33:35.668244    6044 command_runner.go:130] ! I0328 01:32:32.470233       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-240000-m02"
	I0328 01:33:35.668290    6044 command_runner.go:130] ! I0328 01:32:32.470509       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-240000-m02"
	I0328 01:33:35.668363    6044 command_runner.go:130] ! I0328 01:32:32.470641       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-240000-m02"
	I0328 01:33:35.668363    6044 command_runner.go:130] ! I0328 01:32:32.471011       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-240000\" does not exist"
	I0328 01:33:35.668427    6044 command_runner.go:130] ! I0328 01:32:32.471142       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-240000-m02\" does not exist"
	I0328 01:33:35.668703    6044 command_runner.go:130] ! I0328 01:32:32.471391       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-240000-m03\" does not exist"
	I0328 01:33:35.668703    6044 command_runner.go:130] ! I0328 01:32:32.496560       1 shared_informer.go:311] Waiting for caches to sync for garbage collector
	I0328 01:33:35.668764    6044 command_runner.go:130] ! I0328 01:32:32.507769       1 shared_informer.go:318] Caches are synced for TTL
	I0328 01:33:35.668764    6044 command_runner.go:130] ! I0328 01:32:32.513624       1 shared_informer.go:318] Caches are synced for ClusterRoleAggregator
	I0328 01:33:35.668838    6044 command_runner.go:130] ! I0328 01:32:32.518304       1 shared_informer.go:318] Caches are synced for bootstrap_signer
	I0328 01:33:35.668931    6044 command_runner.go:130] ! I0328 01:32:32.519904       1 shared_informer.go:318] Caches are synced for cronjob
	I0328 01:33:35.668931    6044 command_runner.go:130] ! I0328 01:32:32.524287       1 shared_informer.go:318] Caches are synced for TTL after finished
	I0328 01:33:35.668931    6044 command_runner.go:130] ! I0328 01:32:32.529587       1 shared_informer.go:318] Caches are synced for ReplicaSet
	I0328 01:33:35.669012    6044 command_runner.go:130] ! I0328 01:32:32.531767       1 shared_informer.go:318] Caches are synced for ephemeral
	I0328 01:33:35.669087    6044 command_runner.go:130] ! I0328 01:32:32.533493       1 shared_informer.go:318] Caches are synced for expand
	I0328 01:33:35.669087    6044 command_runner.go:130] ! I0328 01:32:32.549795       1 shared_informer.go:318] Caches are synced for job
	I0328 01:33:35.669087    6044 command_runner.go:130] ! I0328 01:32:32.550526       1 shared_informer.go:318] Caches are synced for namespace
	I0328 01:33:35.669087    6044 command_runner.go:130] ! I0328 01:32:32.550874       1 shared_informer.go:318] Caches are synced for ReplicationController
	I0328 01:33:35.669157    6044 command_runner.go:130] ! I0328 01:32:32.551065       1 shared_informer.go:318] Caches are synced for node
	I0328 01:33:35.669215    6044 command_runner.go:130] ! I0328 01:32:32.551152       1 range_allocator.go:174] "Sending events to api server"
	I0328 01:33:35.669215    6044 command_runner.go:130] ! I0328 01:32:32.551255       1 range_allocator.go:178] "Starting range CIDR allocator"
	I0328 01:33:35.669281    6044 command_runner.go:130] ! I0328 01:32:32.551308       1 shared_informer.go:311] Waiting for caches to sync for cidrallocator
	I0328 01:33:35.669281    6044 command_runner.go:130] ! I0328 01:32:32.551340       1 shared_informer.go:318] Caches are synced for cidrallocator
	I0328 01:33:35.669340    6044 command_runner.go:130] ! I0328 01:32:32.554992       1 shared_informer.go:318] Caches are synced for attach detach
	I0328 01:33:35.669340    6044 command_runner.go:130] ! I0328 01:32:32.555603       1 shared_informer.go:318] Caches are synced for service account
	I0328 01:33:35.669403    6044 command_runner.go:130] ! I0328 01:32:32.555933       1 shared_informer.go:318] Caches are synced for persistent volume
	I0328 01:33:35.669460    6044 command_runner.go:130] ! I0328 01:32:32.568824       1 shared_informer.go:318] Caches are synced for taint
	I0328 01:33:35.669460    6044 command_runner.go:130] ! I0328 01:32:32.568944       1 node_lifecycle_controller.go:1222] "Initializing eviction metric for zone" zone=""
	I0328 01:33:35.669571    6044 command_runner.go:130] ! I0328 01:32:32.568985       1 shared_informer.go:318] Caches are synced for taint-eviction-controller
	I0328 01:33:35.669638    6044 command_runner.go:130] ! I0328 01:32:32.569031       1 shared_informer.go:318] Caches are synced for deployment
	I0328 01:33:35.669638    6044 command_runner.go:130] ! I0328 01:32:32.573248       1 event.go:376] "Event occurred" object="multinode-240000" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-240000 event: Registered Node multinode-240000 in Controller"
	I0328 01:33:35.669703    6044 command_runner.go:130] ! I0328 01:32:32.573552       1 event.go:376] "Event occurred" object="multinode-240000-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-240000-m02 event: Registered Node multinode-240000-m02 in Controller"
	I0328 01:33:35.669756    6044 command_runner.go:130] ! I0328 01:32:32.573778       1 event.go:376] "Event occurred" object="multinode-240000-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-240000-m03 event: Registered Node multinode-240000-m03 in Controller"
	I0328 01:33:35.669860    6044 command_runner.go:130] ! I0328 01:32:32.573567       1 shared_informer.go:318] Caches are synced for PV protection
	I0328 01:33:35.669904    6044 command_runner.go:130] ! I0328 01:32:32.573253       1 shared_informer.go:318] Caches are synced for PVC protection
	I0328 01:33:35.669904    6044 command_runner.go:130] ! I0328 01:32:32.575355       1 shared_informer.go:318] Caches are synced for HPA
	I0328 01:33:35.669904    6044 command_runner.go:130] ! I0328 01:32:32.588982       1 shared_informer.go:318] Caches are synced for crt configmap
	I0328 01:33:35.669962    6044 command_runner.go:130] ! I0328 01:32:32.602942       1 shared_informer.go:318] Caches are synced for daemon sets
	I0328 01:33:35.669962    6044 command_runner.go:130] ! I0328 01:32:32.605960       1 shared_informer.go:318] Caches are synced for disruption
	I0328 01:33:35.670038    6044 command_runner.go:130] ! I0328 01:32:32.607311       1 node_lifecycle_controller.go:874] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-240000"
	I0328 01:33:35.670107    6044 command_runner.go:130] ! I0328 01:32:32.607638       1 node_lifecycle_controller.go:874] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-240000-m02"
	I0328 01:33:35.670162    6044 command_runner.go:130] ! I0328 01:32:32.608098       1 node_lifecycle_controller.go:874] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-240000-m03"
	I0328 01:33:35.670209    6044 command_runner.go:130] ! I0328 01:32:32.608944       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="76.132556ms"
	I0328 01:33:35.670267    6044 command_runner.go:130] ! I0328 01:32:32.609570       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="79.623412ms"
	I0328 01:33:35.670328    6044 command_runner.go:130] ! I0328 01:32:32.610117       1 node_lifecycle_controller.go:1068] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I0328 01:33:35.670406    6044 command_runner.go:130] ! I0328 01:32:32.611937       1 shared_informer.go:318] Caches are synced for stateful set
	I0328 01:33:35.670466    6044 command_runner.go:130] ! I0328 01:32:32.612346       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="59.398µs"
	I0328 01:33:35.670466    6044 command_runner.go:130] ! I0328 01:32:32.612652       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="32.799µs"
	I0328 01:33:35.670539    6044 command_runner.go:130] ! I0328 01:32:32.618783       1 shared_informer.go:318] Caches are synced for legacy-service-account-token-cleaner
	I0328 01:33:35.670621    6044 command_runner.go:130] ! I0328 01:32:32.623971       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-serving
	I0328 01:33:35.670621    6044 command_runner.go:130] ! I0328 01:32:32.624286       1 shared_informer.go:318] Caches are synced for GC
	I0328 01:33:35.670679    6044 command_runner.go:130] ! I0328 01:32:32.626634       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0328 01:33:35.670741    6044 command_runner.go:130] ! I0328 01:32:32.626831       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-client
	I0328 01:33:35.670741    6044 command_runner.go:130] ! I0328 01:32:32.651676       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-legacy-unknown
	I0328 01:33:35.670809    6044 command_runner.go:130] ! I0328 01:32:32.659290       1 shared_informer.go:318] Caches are synced for certificate-csrapproving
	I0328 01:33:35.670809    6044 command_runner.go:130] ! I0328 01:32:32.667521       1 shared_informer.go:318] Caches are synced for resource quota
	I0328 01:33:35.670992    6044 command_runner.go:130] ! I0328 01:32:32.683826       1 shared_informer.go:318] Caches are synced for endpoint_slice_mirroring
	I0328 01:33:35.671035    6044 command_runner.go:130] ! I0328 01:32:32.683944       1 shared_informer.go:318] Caches are synced for endpoint
	I0328 01:33:35.671082    6044 command_runner.go:130] ! I0328 01:32:32.737259       1 shared_informer.go:318] Caches are synced for resource quota
	I0328 01:33:35.671082    6044 command_runner.go:130] ! I0328 01:32:32.742870       1 shared_informer.go:318] Caches are synced for endpoint_slice
	I0328 01:33:35.671150    6044 command_runner.go:130] ! I0328 01:32:33.088175       1 shared_informer.go:318] Caches are synced for garbage collector
	I0328 01:33:35.671150    6044 command_runner.go:130] ! I0328 01:32:33.088209       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I0328 01:33:35.671214    6044 command_runner.go:130] ! I0328 01:32:33.097231       1 shared_informer.go:318] Caches are synced for garbage collector
	I0328 01:33:35.671293    6044 command_runner.go:130] ! I0328 01:32:53.970448       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-240000-m02"
	I0328 01:33:35.671293    6044 command_runner.go:130] ! I0328 01:32:57.647643       1 event.go:376] "Event occurred" object="kube-system/storage-provisioner" fieldPath="" kind="Pod" apiVersion="v1" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod kube-system/storage-provisioner"
	I0328 01:33:35.671356    6044 command_runner.go:130] ! I0328 01:32:57.647943       1 event.go:376] "Event occurred" object="default/busybox-7fdf7869d9-ct428" fieldPath="" kind="Pod" apiVersion="v1" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-7fdf7869d9-ct428"
	I0328 01:33:35.671412    6044 command_runner.go:130] ! I0328 01:32:57.648069       1 event.go:376] "Event occurred" object="kube-system/coredns-76f75df574-776ph" fieldPath="" kind="Pod" apiVersion="v1" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod kube-system/coredns-76f75df574-776ph"
	I0328 01:33:35.671498    6044 command_runner.go:130] ! I0328 01:33:12.667954       1 event.go:376] "Event occurred" object="multinode-240000-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node multinode-240000-m02 status is now: NodeNotReady"
	I0328 01:33:35.671596    6044 command_runner.go:130] ! I0328 01:33:12.686681       1 event.go:376] "Event occurred" object="default/busybox-7fdf7869d9-zgwm4" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0328 01:33:35.671646    6044 command_runner.go:130] ! I0328 01:33:12.698519       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="13.246789ms"
	I0328 01:33:35.671646    6044 command_runner.go:130] ! I0328 01:33:12.699114       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="37.9µs"
	I0328 01:33:35.671750    6044 command_runner.go:130] ! I0328 01:33:12.709080       1 event.go:376] "Event occurred" object="kube-system/kindnet-hsnfl" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0328 01:33:35.671818    6044 command_runner.go:130] ! I0328 01:33:12.733251       1 event.go:376] "Event occurred" object="kube-system/kube-proxy-t88gz" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0328 01:33:35.671818    6044 command_runner.go:130] ! I0328 01:33:25.571898       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="20.940169ms"
	I0328 01:33:35.671818    6044 command_runner.go:130] ! I0328 01:33:25.572013       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="31.4µs"
	I0328 01:33:35.671818    6044 command_runner.go:130] ! I0328 01:33:25.596419       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="70.5µs"
	I0328 01:33:35.671818    6044 command_runner.go:130] ! I0328 01:33:25.652921       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="18.37866ms"
	I0328 01:33:35.671818    6044 command_runner.go:130] ! I0328 01:33:25.653855       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="42.9µs"
	I0328 01:33:35.691045    6044 logs.go:123] Gathering logs for container status ...
	I0328 01:33:35.691045    6044 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:33:35.792098    6044 command_runner.go:130] > CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	I0328 01:33:35.792235    6044 command_runner.go:130] > dea6e77fe6072       8c811b4aec35f                                                                                         11 seconds ago       Running             busybox                   1                   57a41fbc578d5       busybox-7fdf7869d9-ct428
	I0328 01:33:35.792285    6044 command_runner.go:130] > e6a5a75ec447f       cbb01a7bd410d                                                                                         11 seconds ago       Running             coredns                   1                   d3a9caca46521       coredns-76f75df574-776ph
	I0328 01:33:35.792285    6044 command_runner.go:130] > 64647587ffc1f       6e38f40d628db                                                                                         31 seconds ago       Running             storage-provisioner       2                   821d3cf9ae1a9       storage-provisioner
	I0328 01:33:35.792339    6044 command_runner.go:130] > ee99098e42fc1       4950bb10b3f87                                                                                         About a minute ago   Running             kindnet-cni               1                   347f7ad7ebaed       kindnet-rwghf
	I0328 01:33:35.792339    6044 command_runner.go:130] > 4dcf03394ea80       6e38f40d628db                                                                                         About a minute ago   Exited              storage-provisioner       1                   821d3cf9ae1a9       storage-provisioner
	I0328 01:33:35.792371    6044 command_runner.go:130] > 7c9638784c60f       a1d263b5dc5b0                                                                                         About a minute ago   Running             kube-proxy                1                   dfd01cb54b7d8       kube-proxy-47rqg
	I0328 01:33:35.792371    6044 command_runner.go:130] > 6539c85e1b61f       39f995c9f1996                                                                                         About a minute ago   Running             kube-apiserver            0                   4dd7c46520744       kube-apiserver-multinode-240000
	I0328 01:33:35.792418    6044 command_runner.go:130] > ab4a76ecb029b       3861cfcd7c04c                                                                                         About a minute ago   Running             etcd                      0                   8780a18ab9755       etcd-multinode-240000
	I0328 01:33:35.792418    6044 command_runner.go:130] > bc83a37dbd03c       8c390d98f50c0                                                                                         About a minute ago   Running             kube-scheduler            1                   8cf9dbbfda9ea       kube-scheduler-multinode-240000
	I0328 01:33:35.792418    6044 command_runner.go:130] > ceaccf323deed       6052a25da3f97                                                                                         About a minute ago   Running             kube-controller-manager   1                   3314134e34d83       kube-controller-manager-multinode-240000
	I0328 01:33:35.792552    6044 command_runner.go:130] > a130300bc7839       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   21 minutes ago       Exited              busybox                   0                   930fbfde452c0       busybox-7fdf7869d9-ct428
	I0328 01:33:35.792552    6044 command_runner.go:130] > 29e516c918ef4       cbb01a7bd410d                                                                                         25 minutes ago       Exited              coredns                   0                   6b6f67390b070       coredns-76f75df574-776ph
	I0328 01:33:35.792552    6044 command_runner.go:130] > dc9808261b21c       kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988              25 minutes ago       Exited              kindnet-cni               0                   6ae82cd0a8489       kindnet-rwghf
	I0328 01:33:35.792621    6044 command_runner.go:130] > bb0b3c5422645       a1d263b5dc5b0                                                                                         25 minutes ago       Exited              kube-proxy                0                   5d9ed3a20e885       kube-proxy-47rqg
	I0328 01:33:35.792646    6044 command_runner.go:130] > 1aa05268773e4       6052a25da3f97                                                                                         26 minutes ago       Exited              kube-controller-manager   0                   763932cfdf0b0       kube-controller-manager-multinode-240000
	I0328 01:33:35.792700    6044 command_runner.go:130] > 7061eab02790d       8c390d98f50c0                                                                                         26 minutes ago       Exited              kube-scheduler            0                   7415d077c6f81       kube-scheduler-multinode-240000
	I0328 01:33:35.795584    6044 logs.go:123] Gathering logs for kubelet ...
	I0328 01:33:35.795696    6044 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:33:35.832756    6044 command_runner.go:130] > Mar 28 01:32:09 multinode-240000 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0328 01:33:35.832756    6044 command_runner.go:130] > Mar 28 01:32:10 multinode-240000 kubelet[1398]: I0328 01:32:10.127138    1398 server.go:487] "Kubelet version" kubeletVersion="v1.29.3"
	I0328 01:33:35.832756    6044 command_runner.go:130] > Mar 28 01:32:10 multinode-240000 kubelet[1398]: I0328 01:32:10.127495    1398 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0328 01:33:35.832756    6044 command_runner.go:130] > Mar 28 01:32:10 multinode-240000 kubelet[1398]: I0328 01:32:10.127845    1398 server.go:919] "Client rotation is on, will bootstrap in background"
	I0328 01:33:35.832756    6044 command_runner.go:130] > Mar 28 01:32:10 multinode-240000 kubelet[1398]: E0328 01:32:10.128279    1398 run.go:74] "command failed" err="failed to run Kubelet: unable to load bootstrap kubeconfig: stat /etc/kubernetes/bootstrap-kubelet.conf: no such file or directory"
	I0328 01:33:35.832756    6044 command_runner.go:130] > Mar 28 01:32:10 multinode-240000 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	I0328 01:33:35.832756    6044 command_runner.go:130] > Mar 28 01:32:10 multinode-240000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	I0328 01:33:35.832756    6044 command_runner.go:130] > Mar 28 01:32:10 multinode-240000 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
	I0328 01:33:35.832756    6044 command_runner.go:130] > Mar 28 01:32:10 multinode-240000 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	I0328 01:33:35.832756    6044 command_runner.go:130] > Mar 28 01:32:10 multinode-240000 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0328 01:33:35.832756    6044 command_runner.go:130] > Mar 28 01:32:10 multinode-240000 kubelet[1450]: I0328 01:32:10.911342    1450 server.go:487] "Kubelet version" kubeletVersion="v1.29.3"
	I0328 01:33:35.832756    6044 command_runner.go:130] > Mar 28 01:32:10 multinode-240000 kubelet[1450]: I0328 01:32:10.911442    1450 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0328 01:33:35.832756    6044 command_runner.go:130] > Mar 28 01:32:10 multinode-240000 kubelet[1450]: I0328 01:32:10.911822    1450 server.go:919] "Client rotation is on, will bootstrap in background"
	I0328 01:33:35.832756    6044 command_runner.go:130] > Mar 28 01:32:10 multinode-240000 kubelet[1450]: E0328 01:32:10.911883    1450 run.go:74] "command failed" err="failed to run Kubelet: unable to load bootstrap kubeconfig: stat /etc/kubernetes/bootstrap-kubelet.conf: no such file or directory"
	I0328 01:33:35.832756    6044 command_runner.go:130] > Mar 28 01:32:10 multinode-240000 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	I0328 01:33:35.832756    6044 command_runner.go:130] > Mar 28 01:32:10 multinode-240000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	I0328 01:33:35.832756    6044 command_runner.go:130] > Mar 28 01:32:11 multinode-240000 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	I0328 01:33:35.832756    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0328 01:33:35.832756    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.568166    1533 server.go:487] "Kubelet version" kubeletVersion="v1.29.3"
	I0328 01:33:35.833815    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.568590    1533 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0328 01:33:35.833815    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.568985    1533 server.go:919] "Client rotation is on, will bootstrap in background"
	I0328 01:33:35.833867    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.572343    1533 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	I0328 01:33:35.833867    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.590932    1533 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0328 01:33:35.833928    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.648763    1533 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /"
	I0328 01:33:35.833962    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.650098    1533 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
	I0328 01:33:35.834053    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.650393    1533 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
	I0328 01:33:35.834119    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.650479    1533 topology_manager.go:138] "Creating topology manager with none policy"
	I0328 01:33:35.834119    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.650495    1533 container_manager_linux.go:301] "Creating device plugin manager"
	I0328 01:33:35.834158    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.652420    1533 state_mem.go:36] "Initialized new in-memory state store"
	I0328 01:33:35.834158    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.654064    1533 kubelet.go:396] "Attempting to sync node with API server"
	I0328 01:33:35.834158    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.654388    1533 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
	I0328 01:33:35.834207    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.654468    1533 kubelet.go:312] "Adding apiserver pod source"
	I0328 01:33:35.834247    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.655057    1533 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
	I0328 01:33:35.834288    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: W0328 01:32:13.659987    1533 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dmultinode-240000&limit=500&resourceVersion=0": dial tcp 172.28.229.19:8443: connect: connection refused
	I0328 01:33:35.834326    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: E0328 01:32:13.660087    1533 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dmultinode-240000&limit=500&resourceVersion=0": dial tcp 172.28.229.19:8443: connect: connection refused
	I0328 01:33:35.834520    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: W0328 01:32:13.669074    1533 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.28.229.19:8443: connect: connection refused
	I0328 01:33:35.834558    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: E0328 01:32:13.669300    1533 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.28.229.19:8443: connect: connection refused
	I0328 01:33:35.834614    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.674896    1533 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="docker" version="26.0.0" apiVersion="v1"
	I0328 01:33:35.834614    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.676909    1533 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
	I0328 01:33:35.834655    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: W0328 01:32:13.677427    1533 probe.go:268] Flexvolume plugin directory at /usr/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
	I0328 01:33:35.834745    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.678180    1533 server.go:1256] "Started kubelet"
	I0328 01:33:35.834786    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.680600    1533 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
	I0328 01:33:35.834786    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.682066    1533 server.go:461] "Adding debug handlers to kubelet server"
	I0328 01:33:35.834786    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.683585    1533 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
	I0328 01:33:35.834846    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.684672    1533 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
	I0328 01:33:35.834925    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: E0328 01:32:13.686372    1533 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events\": dial tcp 172.28.229.19:8443: connect: connection refused" event="&Event{ObjectMeta:{multinode-240000.17c0c99ccc29b81f  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:multinode-240000,UID:multinode-240000,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:multinode-240000,},FirstTimestamp:2024-03-28 01:32:13.678155807 +0000 UTC m=+0.237165597,LastTimestamp:2024-03-28 01:32:13.678155807 +0000 UTC m=+0.237165597,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:multinode-240000,}"
	I0328 01:33:35.834978    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.690229    1533 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
	I0328 01:33:35.835036    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.708889    1533 volume_manager.go:291] "Starting Kubelet Volume Manager"
	I0328 01:33:35.835036    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.712930    1533 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
	I0328 01:33:35.835074    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: E0328 01:32:13.730166    1533 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-240000?timeout=10s\": dial tcp 172.28.229.19:8443: connect: connection refused" interval="200ms"
	I0328 01:33:35.835123    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: W0328 01:32:13.730938    1533 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.28.229.19:8443: connect: connection refused
	I0328 01:33:35.835123    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: E0328 01:32:13.731114    1533 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.28.229.19:8443: connect: connection refused
	I0328 01:33:35.835195    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.739149    1533 reconciler_new.go:29] "Reconciler: start to sync state"
	I0328 01:33:35.835195    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.749138    1533 factory.go:221] Registration of the systemd container factory successfully
	I0328 01:33:35.835278    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.749449    1533 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
	I0328 01:33:35.835305    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.750189    1533 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
	I0328 01:33:35.835305    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.776861    1533 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
	I0328 01:33:35.835305    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.786285    1533 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
	I0328 01:33:35.835305    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.788142    1533 status_manager.go:217] "Starting to sync pod status with apiserver"
	I0328 01:33:35.835305    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.788369    1533 kubelet.go:2329] "Starting kubelet main sync loop"
	I0328 01:33:35.835305    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: E0328 01:32:13.788778    1533 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
	I0328 01:33:35.835305    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: W0328 01:32:13.796114    1533 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.28.229.19:8443: connect: connection refused
	I0328 01:33:35.835305    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: E0328 01:32:13.796211    1533 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.28.229.19:8443: connect: connection refused
	I0328 01:33:35.835305    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.819127    1533 cpu_manager.go:214] "Starting CPU manager" policy="none"
	I0328 01:33:35.835305    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.819290    1533 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
	I0328 01:33:35.835305    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.819423    1533 state_mem.go:36] "Initialized new in-memory state store"
	I0328 01:33:35.835305    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: E0328 01:32:13.820373    1533 iptables.go:575] "Could not set up iptables canary" err=<
	I0328 01:33:35.835305    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	I0328 01:33:35.835305    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	I0328 01:33:35.835305    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]:         Perhaps ip6tables or your kernel needs to be upgraded.
	I0328 01:33:35.835305    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	I0328 01:33:35.835305    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.823600    1533 state_mem.go:88] "Updated default CPUSet" cpuSet=""
	I0328 01:33:35.835305    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.823686    1533 state_mem.go:96] "Updated CPUSet assignments" assignments={}
	I0328 01:33:35.835305    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.823700    1533 policy_none.go:49] "None policy: Start"
	I0328 01:33:35.835305    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.830073    1533 kubelet_node_status.go:73] "Attempting to register node" node="multinode-240000"
	I0328 01:33:35.835305    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: E0328 01:32:13.831657    1533 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.28.229.19:8443: connect: connection refused" node="multinode-240000"
	I0328 01:33:35.835305    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.843841    1533 memory_manager.go:170] "Starting memorymanager" policy="None"
	I0328 01:33:35.835305    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.843966    1533 state_mem.go:35] "Initializing new in-memory state store"
	I0328 01:33:35.835305    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.844749    1533 state_mem.go:75] "Updated machine memory state"
	I0328 01:33:35.835305    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.847245    1533 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
	I0328 01:33:35.835305    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.848649    1533 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
	I0328 01:33:35.835837    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.890150    1533 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="930fbfde452c0b2b3f13a6751fc648a70e87137f38175cb6dd161b40193b9a79"
	I0328 01:33:35.835880    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.890206    1533 topology_manager.go:215] "Topology Admit Handler" podUID="ada1864a97137760b3789cc738948aa2" podNamespace="kube-system" podName="kube-apiserver-multinode-240000"
	I0328 01:33:35.835880    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.908127    1533 topology_manager.go:215] "Topology Admit Handler" podUID="092744cdc60a216294790b52c372bdaa" podNamespace="kube-system" podName="kube-controller-manager-multinode-240000"
	I0328 01:33:35.835978    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: E0328 01:32:13.916258    1533 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"multinode-240000\" not found"
	I0328 01:33:35.836015    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.922354    1533 topology_manager.go:215] "Topology Admit Handler" podUID="f5f9b00a2a0d8b16290abf555def0fb3" podNamespace="kube-system" podName="kube-scheduler-multinode-240000"
	I0328 01:33:35.836064    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: E0328 01:32:13.932448    1533 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-240000?timeout=10s\": dial tcp 172.28.229.19:8443: connect: connection refused" interval="400ms"
	I0328 01:33:35.836101    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.941331    1533 topology_manager.go:215] "Topology Admit Handler" podUID="9f48c65a58defdbb87996760bf93b230" podNamespace="kube-system" podName="etcd-multinode-240000"
	I0328 01:33:35.836101    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.953609    1533 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6b6f67390b0701700963eec28e4c4cc4aa0e852e4ec0f2392f0f6f5d9bdad52a"
	I0328 01:33:35.836150    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.953654    1533 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="763932cfdf0b0ce7a2df0bd78fe540ad8e5811cd74af29eee46932fb651a4df3"
	I0328 01:33:35.836186    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.953669    1533 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6ae82cd0a848978d4fcc6941c33dd7fd18404e11e40d6b5d9f46484a6af7ec7d"
	I0328 01:33:35.836234    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.966780    1533 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/092744cdc60a216294790b52c372bdaa-usr-share-ca-certificates\") pod \"kube-controller-manager-multinode-240000\" (UID: \"092744cdc60a216294790b52c372bdaa\") " pod="kube-system/kube-controller-manager-multinode-240000"
	I0328 01:33:35.836271    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.966955    1533 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ada1864a97137760b3789cc738948aa2-ca-certs\") pod \"kube-apiserver-multinode-240000\" (UID: \"ada1864a97137760b3789cc738948aa2\") " pod="kube-system/kube-apiserver-multinode-240000"
	I0328 01:33:35.836318    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.967022    1533 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ada1864a97137760b3789cc738948aa2-k8s-certs\") pod \"kube-apiserver-multinode-240000\" (UID: \"ada1864a97137760b3789cc738948aa2\") " pod="kube-system/kube-apiserver-multinode-240000"
	I0328 01:33:35.836361    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.967064    1533 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ada1864a97137760b3789cc738948aa2-usr-share-ca-certificates\") pod \"kube-apiserver-multinode-240000\" (UID: \"ada1864a97137760b3789cc738948aa2\") " pod="kube-system/kube-apiserver-multinode-240000"
	I0328 01:33:35.836401    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.967128    1533 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/092744cdc60a216294790b52c372bdaa-ca-certs\") pod \"kube-controller-manager-multinode-240000\" (UID: \"092744cdc60a216294790b52c372bdaa\") " pod="kube-system/kube-controller-manager-multinode-240000"
	I0328 01:33:35.836483    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.967158    1533 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/092744cdc60a216294790b52c372bdaa-flexvolume-dir\") pod \"kube-controller-manager-multinode-240000\" (UID: \"092744cdc60a216294790b52c372bdaa\") " pod="kube-system/kube-controller-manager-multinode-240000"
	I0328 01:33:35.836546    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.967238    1533 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/092744cdc60a216294790b52c372bdaa-k8s-certs\") pod \"kube-controller-manager-multinode-240000\" (UID: \"092744cdc60a216294790b52c372bdaa\") " pod="kube-system/kube-controller-manager-multinode-240000"
	I0328 01:33:35.836546    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.967310    1533 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/092744cdc60a216294790b52c372bdaa-kubeconfig\") pod \"kube-controller-manager-multinode-240000\" (UID: \"092744cdc60a216294790b52c372bdaa\") " pod="kube-system/kube-controller-manager-multinode-240000"
	I0328 01:33:35.836546    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.969606    1533 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="28426f4e9df5e7247fb25f1d5d48b9917e6d95d1f58292026ed0fde424835379"
	I0328 01:33:35.836546    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.985622    1533 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5d9ed3a20e88558fec102c7c331c667347b65f4c3d7d91740e135d71d8c45e6d"
	I0328 01:33:35.836546    6044 command_runner.go:130] > Mar 28 01:32:14 multinode-240000 kubelet[1533]: I0328 01:32:14.000616    1533 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7415d077c6f8104e5bc256b9c398a1cd3b34b68ae6ab02765cf3a8a5090c4b88"
	I0328 01:33:35.836546    6044 command_runner.go:130] > Mar 28 01:32:14 multinode-240000 kubelet[1533]: I0328 01:32:14.015792    1533 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ec77663c174f9dcbe665439298f2fb709a33fb88f7ac97c33834b5a202fe4540"
	I0328 01:33:35.836546    6044 command_runner.go:130] > Mar 28 01:32:14 multinode-240000 kubelet[1533]: I0328 01:32:14.042348    1533 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="20ff2ecb3a6dbfc2d1215de07989433af9d7d836214ecb1ab63afc9e48ef03ce"
	I0328 01:33:35.836546    6044 command_runner.go:130] > Mar 28 01:32:14 multinode-240000 kubelet[1533]: I0328 01:32:14.048339    1533 kubelet_node_status.go:73] "Attempting to register node" node="multinode-240000"
	I0328 01:33:35.836546    6044 command_runner.go:130] > Mar 28 01:32:14 multinode-240000 kubelet[1533]: E0328 01:32:14.049760    1533 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.28.229.19:8443: connect: connection refused" node="multinode-240000"
	I0328 01:33:35.836546    6044 command_runner.go:130] > Mar 28 01:32:14 multinode-240000 kubelet[1533]: I0328 01:32:14.068959    1533 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f5f9b00a2a0d8b16290abf555def0fb3-kubeconfig\") pod \"kube-scheduler-multinode-240000\" (UID: \"f5f9b00a2a0d8b16290abf555def0fb3\") " pod="kube-system/kube-scheduler-multinode-240000"
	I0328 01:33:35.836546    6044 command_runner.go:130] > Mar 28 01:32:14 multinode-240000 kubelet[1533]: I0328 01:32:14.069009    1533 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/9f48c65a58defdbb87996760bf93b230-etcd-certs\") pod \"etcd-multinode-240000\" (UID: \"9f48c65a58defdbb87996760bf93b230\") " pod="kube-system/etcd-multinode-240000"
	I0328 01:33:35.836546    6044 command_runner.go:130] > Mar 28 01:32:14 multinode-240000 kubelet[1533]: I0328 01:32:14.069204    1533 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/9f48c65a58defdbb87996760bf93b230-etcd-data\") pod \"etcd-multinode-240000\" (UID: \"9f48c65a58defdbb87996760bf93b230\") " pod="kube-system/etcd-multinode-240000"
	I0328 01:33:35.836546    6044 command_runner.go:130] > Mar 28 01:32:14 multinode-240000 kubelet[1533]: E0328 01:32:14.335282    1533 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-240000?timeout=10s\": dial tcp 172.28.229.19:8443: connect: connection refused" interval="800ms"
	I0328 01:33:35.836546    6044 command_runner.go:130] > Mar 28 01:32:14 multinode-240000 kubelet[1533]: I0328 01:32:14.463052    1533 kubelet_node_status.go:73] "Attempting to register node" node="multinode-240000"
	I0328 01:33:35.836546    6044 command_runner.go:130] > Mar 28 01:32:14 multinode-240000 kubelet[1533]: E0328 01:32:14.464639    1533 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.28.229.19:8443: connect: connection refused" node="multinode-240000"
	I0328 01:33:35.836546    6044 command_runner.go:130] > Mar 28 01:32:14 multinode-240000 kubelet[1533]: W0328 01:32:14.765820    1533 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.28.229.19:8443: connect: connection refused
	I0328 01:33:35.836546    6044 command_runner.go:130] > Mar 28 01:32:14 multinode-240000 kubelet[1533]: E0328 01:32:14.765926    1533 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.28.229.19:8443: connect: connection refused
	I0328 01:33:35.836546    6044 command_runner.go:130] > Mar 28 01:32:14 multinode-240000 kubelet[1533]: W0328 01:32:14.983409    1533 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.28.229.19:8443: connect: connection refused
	I0328 01:33:35.837124    6044 command_runner.go:130] > Mar 28 01:32:14 multinode-240000 kubelet[1533]: E0328 01:32:14.983490    1533 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.28.229.19:8443: connect: connection refused
	I0328 01:33:35.837170    6044 command_runner.go:130] > Mar 28 01:32:15 multinode-240000 kubelet[1533]: I0328 01:32:15.093921    1533 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4dd7c4652074475872599900ce854e48425a373dfa665073bd9bfb56fa5330c0"
	I0328 01:33:35.837170    6044 command_runner.go:130] > Mar 28 01:32:15 multinode-240000 kubelet[1533]: I0328 01:32:15.109197    1533 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8780a18ab975521e6b1b20e4b7cffe786927f03654dd858b9d179f1d73d13d81"
	I0328 01:33:35.837170    6044 command_runner.go:130] > Mar 28 01:32:15 multinode-240000 kubelet[1533]: E0328 01:32:15.138489    1533 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-240000?timeout=10s\": dial tcp 172.28.229.19:8443: connect: connection refused" interval="1.6s"
	I0328 01:33:35.837270    6044 command_runner.go:130] > Mar 28 01:32:15 multinode-240000 kubelet[1533]: W0328 01:32:15.162611    1533 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dmultinode-240000&limit=500&resourceVersion=0": dial tcp 172.28.229.19:8443: connect: connection refused
	I0328 01:33:35.837309    6044 command_runner.go:130] > Mar 28 01:32:15 multinode-240000 kubelet[1533]: E0328 01:32:15.162839    1533 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dmultinode-240000&limit=500&resourceVersion=0": dial tcp 172.28.229.19:8443: connect: connection refused
	I0328 01:33:35.837360    6044 command_runner.go:130] > Mar 28 01:32:15 multinode-240000 kubelet[1533]: W0328 01:32:15.243486    1533 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.28.229.19:8443: connect: connection refused
	I0328 01:33:35.837396    6044 command_runner.go:130] > Mar 28 01:32:15 multinode-240000 kubelet[1533]: E0328 01:32:15.243618    1533 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.28.229.19:8443: connect: connection refused
	I0328 01:33:35.837443    6044 command_runner.go:130] > Mar 28 01:32:15 multinode-240000 kubelet[1533]: I0328 01:32:15.300156    1533 kubelet_node_status.go:73] "Attempting to register node" node="multinode-240000"
	I0328 01:33:35.837478    6044 command_runner.go:130] > Mar 28 01:32:15 multinode-240000 kubelet[1533]: E0328 01:32:15.300985    1533 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.28.229.19:8443: connect: connection refused" node="multinode-240000"
	I0328 01:33:35.837478    6044 command_runner.go:130] > Mar 28 01:32:16 multinode-240000 kubelet[1533]: I0328 01:32:16.919859    1533 kubelet_node_status.go:73] "Attempting to register node" node="multinode-240000"
	I0328 01:33:35.837555    6044 command_runner.go:130] > Mar 28 01:32:19 multinode-240000 kubelet[1533]: I0328 01:32:19.585350    1533 kubelet_node_status.go:112] "Node was previously registered" node="multinode-240000"
	I0328 01:33:35.837555    6044 command_runner.go:130] > Mar 28 01:32:19 multinode-240000 kubelet[1533]: I0328 01:32:19.586142    1533 kubelet_node_status.go:76] "Successfully registered node" node="multinode-240000"
	I0328 01:33:35.837587    6044 command_runner.go:130] > Mar 28 01:32:19 multinode-240000 kubelet[1533]: I0328 01:32:19.588202    1533 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	I0328 01:33:35.837623    6044 command_runner.go:130] > Mar 28 01:32:19 multinode-240000 kubelet[1533]: I0328 01:32:19.589607    1533 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	I0328 01:33:35.837665    6044 command_runner.go:130] > Mar 28 01:32:19 multinode-240000 kubelet[1533]: I0328 01:32:19.606942    1533 setters.go:568] "Node became not ready" node="multinode-240000" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-03-28T01:32:19Z","lastTransitionTime":"2024-03-28T01:32:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"}
	I0328 01:33:35.837665    6044 command_runner.go:130] > Mar 28 01:32:19 multinode-240000 kubelet[1533]: I0328 01:32:19.664958    1533 apiserver.go:52] "Watching apiserver"
	I0328 01:33:35.837702    6044 command_runner.go:130] > Mar 28 01:32:19 multinode-240000 kubelet[1533]: I0328 01:32:19.670955    1533 topology_manager.go:215] "Topology Admit Handler" podUID="dc1416cc-736d-4eab-b95d-e963572b78e3" podNamespace="kube-system" podName="coredns-76f75df574-776ph"
	I0328 01:33:35.837762    6044 command_runner.go:130] > Mar 28 01:32:19 multinode-240000 kubelet[1533]: E0328 01:32:19.671192    1533 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-76f75df574-776ph" podUID="dc1416cc-736d-4eab-b95d-e963572b78e3"
	I0328 01:33:35.837798    6044 command_runner.go:130] > Mar 28 01:32:19 multinode-240000 kubelet[1533]: I0328 01:32:19.671207    1533 kubelet.go:1903] "Trying to delete pod" pod="kube-system/etcd-multinode-240000" podUID="8c9e76e4-ed9f-4595-aa5e-ddd6e74f4e93"
	I0328 01:33:35.837798    6044 command_runner.go:130] > Mar 28 01:32:19 multinode-240000 kubelet[1533]: I0328 01:32:19.672582    1533 topology_manager.go:215] "Topology Admit Handler" podUID="7c75e225-0e90-4916-bf27-a00a036e0955" podNamespace="kube-system" podName="kindnet-rwghf"
	I0328 01:33:35.837863    6044 command_runner.go:130] > Mar 28 01:32:19 multinode-240000 kubelet[1533]: I0328 01:32:19.672700    1533 topology_manager.go:215] "Topology Admit Handler" podUID="22fd5683-834d-47ae-a5b4-1ed980514e1b" podNamespace="kube-system" podName="kube-proxy-47rqg"
	I0328 01:33:35.837863    6044 command_runner.go:130] > Mar 28 01:32:19 multinode-240000 kubelet[1533]: I0328 01:32:19.672921    1533 topology_manager.go:215] "Topology Admit Handler" podUID="3f881c2f-5b9a-4dc5-bc8c-56784b9ff60f" podNamespace="kube-system" podName="storage-provisioner"
	I0328 01:33:35.837971    6044 command_runner.go:130] > Mar 28 01:32:19 multinode-240000 kubelet[1533]: I0328 01:32:19.672997    1533 topology_manager.go:215] "Topology Admit Handler" podUID="82be2bd2-ca76-4804-8e23-ebd40a434863" podNamespace="default" podName="busybox-7fdf7869d9-ct428"
	I0328 01:33:35.838169    6044 command_runner.go:130] > Mar 28 01:32:19 multinode-240000 kubelet[1533]: E0328 01:32:19.673204    1533 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7fdf7869d9-ct428" podUID="82be2bd2-ca76-4804-8e23-ebd40a434863"
	I0328 01:33:35.838211    6044 command_runner.go:130] > Mar 28 01:32:19 multinode-240000 kubelet[1533]: I0328 01:32:19.674661    1533 kubelet.go:1903] "Trying to delete pod" pod="kube-system/kube-apiserver-multinode-240000" podUID="7736298d-3898-4693-84bf-2311305bf52c"
	I0328 01:33:35.838211    6044 command_runner.go:130] > Mar 28 01:32:19 multinode-240000 kubelet[1533]: I0328 01:32:19.710220    1533 kubelet.go:1908] "Deleted mirror pod because it is outdated" pod="kube-system/etcd-multinode-240000"
	I0328 01:33:35.838211    6044 command_runner.go:130] > Mar 28 01:32:19 multinode-240000 kubelet[1533]: I0328 01:32:19.714418    1533 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
	I0328 01:33:35.838305    6044 command_runner.go:130] > Mar 28 01:32:19 multinode-240000 kubelet[1533]: I0328 01:32:19.725067    1533 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7c75e225-0e90-4916-bf27-a00a036e0955-xtables-lock\") pod \"kindnet-rwghf\" (UID: \"7c75e225-0e90-4916-bf27-a00a036e0955\") " pod="kube-system/kindnet-rwghf"
	I0328 01:33:35.838332    6044 command_runner.go:130] > Mar 28 01:32:19 multinode-240000 kubelet[1533]: I0328 01:32:19.725144    1533 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/3f881c2f-5b9a-4dc5-bc8c-56784b9ff60f-tmp\") pod \"storage-provisioner\" (UID: \"3f881c2f-5b9a-4dc5-bc8c-56784b9ff60f\") " pod="kube-system/storage-provisioner"
	I0328 01:33:35.838332    6044 command_runner.go:130] > Mar 28 01:32:19 multinode-240000 kubelet[1533]: I0328 01:32:19.725200    1533 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/22fd5683-834d-47ae-a5b4-1ed980514e1b-xtables-lock\") pod \"kube-proxy-47rqg\" (UID: \"22fd5683-834d-47ae-a5b4-1ed980514e1b\") " pod="kube-system/kube-proxy-47rqg"
	I0328 01:33:35.838332    6044 command_runner.go:130] > Mar 28 01:32:19 multinode-240000 kubelet[1533]: I0328 01:32:19.725237    1533 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/7c75e225-0e90-4916-bf27-a00a036e0955-cni-cfg\") pod \"kindnet-rwghf\" (UID: \"7c75e225-0e90-4916-bf27-a00a036e0955\") " pod="kube-system/kindnet-rwghf"
	I0328 01:33:35.838332    6044 command_runner.go:130] > Mar 28 01:32:19 multinode-240000 kubelet[1533]: I0328 01:32:19.725266    1533 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7c75e225-0e90-4916-bf27-a00a036e0955-lib-modules\") pod \"kindnet-rwghf\" (UID: \"7c75e225-0e90-4916-bf27-a00a036e0955\") " pod="kube-system/kindnet-rwghf"
	I0328 01:33:35.838332    6044 command_runner.go:130] > Mar 28 01:32:19 multinode-240000 kubelet[1533]: I0328 01:32:19.725305    1533 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/22fd5683-834d-47ae-a5b4-1ed980514e1b-lib-modules\") pod \"kube-proxy-47rqg\" (UID: \"22fd5683-834d-47ae-a5b4-1ed980514e1b\") " pod="kube-system/kube-proxy-47rqg"
	I0328 01:33:35.838332    6044 command_runner.go:130] > Mar 28 01:32:19 multinode-240000 kubelet[1533]: E0328 01:32:19.725432    1533 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0328 01:33:35.838332    6044 command_runner.go:130] > Mar 28 01:32:19 multinode-240000 kubelet[1533]: E0328 01:32:19.725551    1533 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/dc1416cc-736d-4eab-b95d-e963572b78e3-config-volume podName:dc1416cc-736d-4eab-b95d-e963572b78e3 nodeName:}" failed. No retries permitted until 2024-03-28 01:32:20.225500685 +0000 UTC m=+6.784510375 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/dc1416cc-736d-4eab-b95d-e963572b78e3-config-volume") pod "coredns-76f75df574-776ph" (UID: "dc1416cc-736d-4eab-b95d-e963572b78e3") : object "kube-system"/"coredns" not registered
	I0328 01:33:35.838332    6044 command_runner.go:130] > Mar 28 01:32:19 multinode-240000 kubelet[1533]: I0328 01:32:19.727738    1533 kubelet.go:1908] "Deleted mirror pod because it is outdated" pod="kube-system/kube-apiserver-multinode-240000"
	I0328 01:33:35.838332    6044 command_runner.go:130] > Mar 28 01:32:19 multinode-240000 kubelet[1533]: I0328 01:32:19.734766    1533 status_manager.go:877] "Failed to update status for pod" pod="kube-system/etcd-multinode-240000" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c9e76e4-ed9f-4595-aa5e-ddd6e74f4e93\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"$setElementOrder/hostIPs\\\":[{\\\"ip\\\":\\\"172.28.229.19\\\"}],\\\"$setElementOrder/podIPs\\\":[{\\\"ip\\\":\\\"172.28.229.19\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2024-03-28T01:32:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2024-03-28T01:32:14Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2024-03-28T01:32:14Z\\\",\\\"message\\\":\\\"containers with unready status: [etcd]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2024-03-28T01:32:14Z\\\",\\\"message\\\":\\\"containers with unready status: [etcd]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2024-03-28T01:32:14Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"docker://ab4a76ecb029b98cd5b2c7ce34c9d81d5da9b76e6721e8e54059f840240fcb66\\\",\\\"image\\\":\\\"registry.k8s.io/etcd:3.5.12-0\\\",\\\"imageID\\\":\\\"docker-pullable://registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2024-03-28T01:32:15Z\\\"}}}],\\\"hostIP\\\":\\\"172.28.229.19\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"172.28.229.19\\\"},{\\\"$patch\\\":\\\"delete\\\",\\\"ip\\\":\\\"172.28.227.122\\\"}],\\\"podIP\\\":\\\"172.28.229.19\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"172.28.229.19\\\"},{\\\"$patch\\\":\\\"delete\\\",\\\"ip\\\":\\\"172.28.227.122\\\"}],\\\"startTime\\\":\\\"2024-03-28T01:32:14Z\\\"}}\" for pod \"kube-system\"/\"etcd-multinode-240000\": pods \"etcd-multinode-240000\" not found"
	I0328 01:33:35.838332    6044 command_runner.go:130] > Mar 28 01:32:19 multinode-240000 kubelet[1533]: I0328 01:32:19.799037    1533 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="08b85a8adf05b50d7739532a291175d4" path="/var/lib/kubelet/pods/08b85a8adf05b50d7739532a291175d4/volumes"
	I0328 01:33:35.838332    6044 command_runner.go:130] > Mar 28 01:32:19 multinode-240000 kubelet[1533]: E0328 01:32:19.799563    1533 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0328 01:33:35.838332    6044 command_runner.go:130] > Mar 28 01:32:19 multinode-240000 kubelet[1533]: E0328 01:32:19.799591    1533 projected.go:200] Error preparing data for projected volume kube-api-access-86msg for pod default/busybox-7fdf7869d9-ct428: object "default"/"kube-root-ca.crt" not registered
	I0328 01:33:35.838332    6044 command_runner.go:130] > Mar 28 01:32:19 multinode-240000 kubelet[1533]: E0328 01:32:19.799660    1533 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/82be2bd2-ca76-4804-8e23-ebd40a434863-kube-api-access-86msg podName:82be2bd2-ca76-4804-8e23-ebd40a434863 nodeName:}" failed. No retries permitted until 2024-03-28 01:32:20.299638671 +0000 UTC m=+6.858648361 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-86msg" (UniqueName: "kubernetes.io/projected/82be2bd2-ca76-4804-8e23-ebd40a434863-kube-api-access-86msg") pod "busybox-7fdf7869d9-ct428" (UID: "82be2bd2-ca76-4804-8e23-ebd40a434863") : object "default"/"kube-root-ca.crt" not registered
	I0328 01:33:35.838942    6044 command_runner.go:130] > Mar 28 01:32:19 multinode-240000 kubelet[1533]: I0328 01:32:19.802339    1533 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3bf911dad00226d1456d6201aff35c8b" path="/var/lib/kubelet/pods/3bf911dad00226d1456d6201aff35c8b/volumes"
	I0328 01:33:35.838989    6044 command_runner.go:130] > Mar 28 01:32:19 multinode-240000 kubelet[1533]: I0328 01:32:19.949419    1533 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/etcd-multinode-240000" podStartSLOduration=0.949323047 podStartE2EDuration="949.323047ms" podCreationTimestamp="2024-03-28 01:32:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-03-28 01:32:19.919943873 +0000 UTC m=+6.478953663" watchObservedRunningTime="2024-03-28 01:32:19.949323047 +0000 UTC m=+6.508332737"
	I0328 01:33:35.838989    6044 command_runner.go:130] > Mar 28 01:32:19 multinode-240000 kubelet[1533]: I0328 01:32:19.949693    1533 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-multinode-240000" podStartSLOduration=0.949665448 podStartE2EDuration="949.665448ms" podCreationTimestamp="2024-03-28 01:32:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-03-28 01:32:19.941427427 +0000 UTC m=+6.500437217" watchObservedRunningTime="2024-03-28 01:32:19.949665448 +0000 UTC m=+6.508675138"
	I0328 01:33:35.838989    6044 command_runner.go:130] > Mar 28 01:32:20 multinode-240000 kubelet[1533]: E0328 01:32:20.230868    1533 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0328 01:33:35.838989    6044 command_runner.go:130] > Mar 28 01:32:20 multinode-240000 kubelet[1533]: E0328 01:32:20.231013    1533 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/dc1416cc-736d-4eab-b95d-e963572b78e3-config-volume podName:dc1416cc-736d-4eab-b95d-e963572b78e3 nodeName:}" failed. No retries permitted until 2024-03-28 01:32:21.230991954 +0000 UTC m=+7.790001744 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/dc1416cc-736d-4eab-b95d-e963572b78e3-config-volume") pod "coredns-76f75df574-776ph" (UID: "dc1416cc-736d-4eab-b95d-e963572b78e3") : object "kube-system"/"coredns" not registered
	I0328 01:33:35.838989    6044 command_runner.go:130] > Mar 28 01:32:20 multinode-240000 kubelet[1533]: E0328 01:32:20.331172    1533 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0328 01:33:35.838989    6044 command_runner.go:130] > Mar 28 01:32:20 multinode-240000 kubelet[1533]: E0328 01:32:20.331223    1533 projected.go:200] Error preparing data for projected volume kube-api-access-86msg for pod default/busybox-7fdf7869d9-ct428: object "default"/"kube-root-ca.crt" not registered
	I0328 01:33:35.838989    6044 command_runner.go:130] > Mar 28 01:32:20 multinode-240000 kubelet[1533]: E0328 01:32:20.331292    1533 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/82be2bd2-ca76-4804-8e23-ebd40a434863-kube-api-access-86msg podName:82be2bd2-ca76-4804-8e23-ebd40a434863 nodeName:}" failed. No retries permitted until 2024-03-28 01:32:21.331274305 +0000 UTC m=+7.890283995 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-86msg" (UniqueName: "kubernetes.io/projected/82be2bd2-ca76-4804-8e23-ebd40a434863-kube-api-access-86msg") pod "busybox-7fdf7869d9-ct428" (UID: "82be2bd2-ca76-4804-8e23-ebd40a434863") : object "default"/"kube-root-ca.crt" not registered
	I0328 01:33:35.838989    6044 command_runner.go:130] > Mar 28 01:32:20 multinode-240000 kubelet[1533]: I0328 01:32:20.880883    1533 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="821d3cf9ae1a9ffce2f350e9ee239e00fd8743eb338fae8a5b39734fc9cabf5e"
	I0328 01:33:35.838989    6044 command_runner.go:130] > Mar 28 01:32:20 multinode-240000 kubelet[1533]: I0328 01:32:20.905234    1533 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dfd01cb54b7d89aef97b057d7578bb34d4f58b0e2c9aacddeeff9fbb19db3cb6"
	I0328 01:33:35.838989    6044 command_runner.go:130] > Mar 28 01:32:21 multinode-240000 kubelet[1533]: E0328 01:32:21.238101    1533 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0328 01:33:35.838989    6044 command_runner.go:130] > Mar 28 01:32:21 multinode-240000 kubelet[1533]: E0328 01:32:21.238271    1533 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/dc1416cc-736d-4eab-b95d-e963572b78e3-config-volume podName:dc1416cc-736d-4eab-b95d-e963572b78e3 nodeName:}" failed. No retries permitted until 2024-03-28 01:32:23.238201582 +0000 UTC m=+9.797211372 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/dc1416cc-736d-4eab-b95d-e963572b78e3-config-volume") pod "coredns-76f75df574-776ph" (UID: "dc1416cc-736d-4eab-b95d-e963572b78e3") : object "kube-system"/"coredns" not registered
	I0328 01:33:35.838989    6044 command_runner.go:130] > Mar 28 01:32:21 multinode-240000 kubelet[1533]: I0328 01:32:21.272138    1533 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="347f7ad7ebaed8796c8b12cf936e661c605c1c7a9dc02ccb15b4c682a96c1058"
	I0328 01:33:35.838989    6044 command_runner.go:130] > Mar 28 01:32:21 multinode-240000 kubelet[1533]: E0328 01:32:21.338941    1533 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0328 01:33:35.838989    6044 command_runner.go:130] > Mar 28 01:32:21 multinode-240000 kubelet[1533]: E0328 01:32:21.338996    1533 projected.go:200] Error preparing data for projected volume kube-api-access-86msg for pod default/busybox-7fdf7869d9-ct428: object "default"/"kube-root-ca.crt" not registered
	I0328 01:33:35.839583    6044 command_runner.go:130] > Mar 28 01:32:21 multinode-240000 kubelet[1533]: E0328 01:32:21.339062    1533 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/82be2bd2-ca76-4804-8e23-ebd40a434863-kube-api-access-86msg podName:82be2bd2-ca76-4804-8e23-ebd40a434863 nodeName:}" failed. No retries permitted until 2024-03-28 01:32:23.339043635 +0000 UTC m=+9.898053325 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-86msg" (UniqueName: "kubernetes.io/projected/82be2bd2-ca76-4804-8e23-ebd40a434863-kube-api-access-86msg") pod "busybox-7fdf7869d9-ct428" (UID: "82be2bd2-ca76-4804-8e23-ebd40a434863") : object "default"/"kube-root-ca.crt" not registered
	I0328 01:33:35.839583    6044 command_runner.go:130] > Mar 28 01:32:21 multinode-240000 kubelet[1533]: E0328 01:32:21.791679    1533 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-76f75df574-776ph" podUID="dc1416cc-736d-4eab-b95d-e963572b78e3"
	I0328 01:33:35.839583    6044 command_runner.go:130] > Mar 28 01:32:21 multinode-240000 kubelet[1533]: E0328 01:32:21.792217    1533 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7fdf7869d9-ct428" podUID="82be2bd2-ca76-4804-8e23-ebd40a434863"
	I0328 01:33:35.839583    6044 command_runner.go:130] > Mar 28 01:32:23 multinode-240000 kubelet[1533]: E0328 01:32:23.261654    1533 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0328 01:33:35.839771    6044 command_runner.go:130] > Mar 28 01:32:23 multinode-240000 kubelet[1533]: E0328 01:32:23.261858    1533 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/dc1416cc-736d-4eab-b95d-e963572b78e3-config-volume podName:dc1416cc-736d-4eab-b95d-e963572b78e3 nodeName:}" failed. No retries permitted until 2024-03-28 01:32:27.261834961 +0000 UTC m=+13.820844751 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/dc1416cc-736d-4eab-b95d-e963572b78e3-config-volume") pod "coredns-76f75df574-776ph" (UID: "dc1416cc-736d-4eab-b95d-e963572b78e3") : object "kube-system"/"coredns" not registered
	I0328 01:33:35.839771    6044 command_runner.go:130] > Mar 28 01:32:23 multinode-240000 kubelet[1533]: E0328 01:32:23.362225    1533 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0328 01:33:35.839855    6044 command_runner.go:130] > Mar 28 01:32:23 multinode-240000 kubelet[1533]: E0328 01:32:23.362265    1533 projected.go:200] Error preparing data for projected volume kube-api-access-86msg for pod default/busybox-7fdf7869d9-ct428: object "default"/"kube-root-ca.crt" not registered
	I0328 01:33:35.839855    6044 command_runner.go:130] > Mar 28 01:32:23 multinode-240000 kubelet[1533]: E0328 01:32:23.362325    1533 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/82be2bd2-ca76-4804-8e23-ebd40a434863-kube-api-access-86msg podName:82be2bd2-ca76-4804-8e23-ebd40a434863 nodeName:}" failed. No retries permitted until 2024-03-28 01:32:27.362305413 +0000 UTC m=+13.921315103 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-86msg" (UniqueName: "kubernetes.io/projected/82be2bd2-ca76-4804-8e23-ebd40a434863-kube-api-access-86msg") pod "busybox-7fdf7869d9-ct428" (UID: "82be2bd2-ca76-4804-8e23-ebd40a434863") : object "default"/"kube-root-ca.crt" not registered
	I0328 01:33:35.839934    6044 command_runner.go:130] > Mar 28 01:32:23 multinode-240000 kubelet[1533]: E0328 01:32:23.790396    1533 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-76f75df574-776ph" podUID="dc1416cc-736d-4eab-b95d-e963572b78e3"
	I0328 01:33:35.840013    6044 command_runner.go:130] > Mar 28 01:32:23 multinode-240000 kubelet[1533]: E0328 01:32:23.790902    1533 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7fdf7869d9-ct428" podUID="82be2bd2-ca76-4804-8e23-ebd40a434863"
	I0328 01:33:35.840013    6044 command_runner.go:130] > Mar 28 01:32:25 multinode-240000 kubelet[1533]: E0328 01:32:25.790044    1533 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7fdf7869d9-ct428" podUID="82be2bd2-ca76-4804-8e23-ebd40a434863"
	I0328 01:33:35.840091    6044 command_runner.go:130] > Mar 28 01:32:25 multinode-240000 kubelet[1533]: E0328 01:32:25.790562    1533 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-76f75df574-776ph" podUID="dc1416cc-736d-4eab-b95d-e963572b78e3"
	I0328 01:33:35.840091    6044 command_runner.go:130] > Mar 28 01:32:27 multinode-240000 kubelet[1533]: E0328 01:32:27.292215    1533 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0328 01:33:35.840199    6044 command_runner.go:130] > Mar 28 01:32:27 multinode-240000 kubelet[1533]: E0328 01:32:27.292399    1533 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/dc1416cc-736d-4eab-b95d-e963572b78e3-config-volume podName:dc1416cc-736d-4eab-b95d-e963572b78e3 nodeName:}" failed. No retries permitted until 2024-03-28 01:32:35.292355671 +0000 UTC m=+21.851365461 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/dc1416cc-736d-4eab-b95d-e963572b78e3-config-volume") pod "coredns-76f75df574-776ph" (UID: "dc1416cc-736d-4eab-b95d-e963572b78e3") : object "kube-system"/"coredns" not registered
	I0328 01:33:35.840199    6044 command_runner.go:130] > Mar 28 01:32:27 multinode-240000 kubelet[1533]: E0328 01:32:27.393085    1533 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0328 01:33:35.840289    6044 command_runner.go:130] > Mar 28 01:32:27 multinode-240000 kubelet[1533]: E0328 01:32:27.393207    1533 projected.go:200] Error preparing data for projected volume kube-api-access-86msg for pod default/busybox-7fdf7869d9-ct428: object "default"/"kube-root-ca.crt" not registered
	I0328 01:33:35.840363    6044 command_runner.go:130] > Mar 28 01:32:27 multinode-240000 kubelet[1533]: E0328 01:32:27.393270    1533 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/82be2bd2-ca76-4804-8e23-ebd40a434863-kube-api-access-86msg podName:82be2bd2-ca76-4804-8e23-ebd40a434863 nodeName:}" failed. No retries permitted until 2024-03-28 01:32:35.393251521 +0000 UTC m=+21.952261211 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-86msg" (UniqueName: "kubernetes.io/projected/82be2bd2-ca76-4804-8e23-ebd40a434863-kube-api-access-86msg") pod "busybox-7fdf7869d9-ct428" (UID: "82be2bd2-ca76-4804-8e23-ebd40a434863") : object "default"/"kube-root-ca.crt" not registered
	I0328 01:33:35.840363    6044 command_runner.go:130] > Mar 28 01:32:27 multinode-240000 kubelet[1533]: E0328 01:32:27.791559    1533 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7fdf7869d9-ct428" podUID="82be2bd2-ca76-4804-8e23-ebd40a434863"
	I0328 01:33:35.840456    6044 command_runner.go:130] > Mar 28 01:32:27 multinode-240000 kubelet[1533]: E0328 01:32:27.792839    1533 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-76f75df574-776ph" podUID="dc1416cc-736d-4eab-b95d-e963572b78e3"
	I0328 01:33:35.840565    6044 command_runner.go:130] > Mar 28 01:32:29 multinode-240000 kubelet[1533]: E0328 01:32:29.790087    1533 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-76f75df574-776ph" podUID="dc1416cc-736d-4eab-b95d-e963572b78e3"
	I0328 01:33:35.840565    6044 command_runner.go:130] > Mar 28 01:32:29 multinode-240000 kubelet[1533]: E0328 01:32:29.793138    1533 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7fdf7869d9-ct428" podUID="82be2bd2-ca76-4804-8e23-ebd40a434863"
	I0328 01:33:35.840643    6044 command_runner.go:130] > Mar 28 01:32:31 multinode-240000 kubelet[1533]: E0328 01:32:31.791578    1533 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7fdf7869d9-ct428" podUID="82be2bd2-ca76-4804-8e23-ebd40a434863"
	I0328 01:33:35.840643    6044 command_runner.go:130] > Mar 28 01:32:31 multinode-240000 kubelet[1533]: E0328 01:32:31.792402    1533 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-76f75df574-776ph" podUID="dc1416cc-736d-4eab-b95d-e963572b78e3"
	I0328 01:33:35.840851    6044 command_runner.go:130] > Mar 28 01:32:33 multinode-240000 kubelet[1533]: E0328 01:32:33.789342    1533 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7fdf7869d9-ct428" podUID="82be2bd2-ca76-4804-8e23-ebd40a434863"
	I0328 01:33:35.840851    6044 command_runner.go:130] > Mar 28 01:32:33 multinode-240000 kubelet[1533]: E0328 01:32:33.790306    1533 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-76f75df574-776ph" podUID="dc1416cc-736d-4eab-b95d-e963572b78e3"
	I0328 01:33:35.840851    6044 command_runner.go:130] > Mar 28 01:32:35 multinode-240000 kubelet[1533]: E0328 01:32:35.358933    1533 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0328 01:33:35.840851    6044 command_runner.go:130] > Mar 28 01:32:35 multinode-240000 kubelet[1533]: E0328 01:32:35.359250    1533 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/dc1416cc-736d-4eab-b95d-e963572b78e3-config-volume podName:dc1416cc-736d-4eab-b95d-e963572b78e3 nodeName:}" failed. No retries permitted until 2024-03-28 01:32:51.359180546 +0000 UTC m=+37.918190236 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/dc1416cc-736d-4eab-b95d-e963572b78e3-config-volume") pod "coredns-76f75df574-776ph" (UID: "dc1416cc-736d-4eab-b95d-e963572b78e3") : object "kube-system"/"coredns" not registered
	I0328 01:33:35.841431    6044 command_runner.go:130] > Mar 28 01:32:35 multinode-240000 kubelet[1533]: E0328 01:32:35.460013    1533 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0328 01:33:35.841431    6044 command_runner.go:130] > Mar 28 01:32:35 multinode-240000 kubelet[1533]: E0328 01:32:35.460054    1533 projected.go:200] Error preparing data for projected volume kube-api-access-86msg for pod default/busybox-7fdf7869d9-ct428: object "default"/"kube-root-ca.crt" not registered
	I0328 01:33:35.841431    6044 command_runner.go:130] > Mar 28 01:32:35 multinode-240000 kubelet[1533]: E0328 01:32:35.460129    1533 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/82be2bd2-ca76-4804-8e23-ebd40a434863-kube-api-access-86msg podName:82be2bd2-ca76-4804-8e23-ebd40a434863 nodeName:}" failed. No retries permitted until 2024-03-28 01:32:51.460096057 +0000 UTC m=+38.019105747 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-86msg" (UniqueName: "kubernetes.io/projected/82be2bd2-ca76-4804-8e23-ebd40a434863-kube-api-access-86msg") pod "busybox-7fdf7869d9-ct428" (UID: "82be2bd2-ca76-4804-8e23-ebd40a434863") : object "default"/"kube-root-ca.crt" not registered
	I0328 01:33:35.841568    6044 command_runner.go:130] > Mar 28 01:32:35 multinode-240000 kubelet[1533]: E0328 01:32:35.790050    1533 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7fdf7869d9-ct428" podUID="82be2bd2-ca76-4804-8e23-ebd40a434863"
	I0328 01:33:35.841602    6044 command_runner.go:130] > Mar 28 01:32:35 multinode-240000 kubelet[1533]: E0328 01:32:35.792176    1533 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-76f75df574-776ph" podUID="dc1416cc-736d-4eab-b95d-e963572b78e3"
	I0328 01:33:35.841602    6044 command_runner.go:130] > Mar 28 01:32:37 multinode-240000 kubelet[1533]: E0328 01:32:37.791217    1533 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-76f75df574-776ph" podUID="dc1416cc-736d-4eab-b95d-e963572b78e3"
	I0328 01:33:35.841602    6044 command_runner.go:130] > Mar 28 01:32:37 multinode-240000 kubelet[1533]: E0328 01:32:37.792228    1533 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7fdf7869d9-ct428" podUID="82be2bd2-ca76-4804-8e23-ebd40a434863"
	I0328 01:33:35.841602    6044 command_runner.go:130] > Mar 28 01:32:39 multinode-240000 kubelet[1533]: E0328 01:32:39.789082    1533 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7fdf7869d9-ct428" podUID="82be2bd2-ca76-4804-8e23-ebd40a434863"
	I0328 01:33:35.841602    6044 command_runner.go:130] > Mar 28 01:32:39 multinode-240000 kubelet[1533]: E0328 01:32:39.789888    1533 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-76f75df574-776ph" podUID="dc1416cc-736d-4eab-b95d-e963572b78e3"
	I0328 01:33:35.841602    6044 command_runner.go:130] > Mar 28 01:32:41 multinode-240000 kubelet[1533]: E0328 01:32:41.789933    1533 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7fdf7869d9-ct428" podUID="82be2bd2-ca76-4804-8e23-ebd40a434863"
	I0328 01:33:35.841602    6044 command_runner.go:130] > Mar 28 01:32:41 multinode-240000 kubelet[1533]: E0328 01:32:41.790703    1533 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-76f75df574-776ph" podUID="dc1416cc-736d-4eab-b95d-e963572b78e3"
	I0328 01:33:35.841602    6044 command_runner.go:130] > Mar 28 01:32:43 multinode-240000 kubelet[1533]: E0328 01:32:43.789453    1533 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7fdf7869d9-ct428" podUID="82be2bd2-ca76-4804-8e23-ebd40a434863"
	I0328 01:33:35.841602    6044 command_runner.go:130] > Mar 28 01:32:43 multinode-240000 kubelet[1533]: E0328 01:32:43.790318    1533 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-76f75df574-776ph" podUID="dc1416cc-736d-4eab-b95d-e963572b78e3"
	I0328 01:33:35.841602    6044 command_runner.go:130] > Mar 28 01:32:45 multinode-240000 kubelet[1533]: E0328 01:32:45.789795    1533 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-76f75df574-776ph" podUID="dc1416cc-736d-4eab-b95d-e963572b78e3"
	I0328 01:33:35.841602    6044 command_runner.go:130] > Mar 28 01:32:45 multinode-240000 kubelet[1533]: E0328 01:32:45.790497    1533 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7fdf7869d9-ct428" podUID="82be2bd2-ca76-4804-8e23-ebd40a434863"
	I0328 01:33:35.841602    6044 command_runner.go:130] > Mar 28 01:32:47 multinode-240000 kubelet[1533]: E0328 01:32:47.789306    1533 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7fdf7869d9-ct428" podUID="82be2bd2-ca76-4804-8e23-ebd40a434863"
	I0328 01:33:35.841602    6044 command_runner.go:130] > Mar 28 01:32:47 multinode-240000 kubelet[1533]: E0328 01:32:47.790760    1533 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-76f75df574-776ph" podUID="dc1416cc-736d-4eab-b95d-e963572b78e3"
	I0328 01:33:35.841602    6044 command_runner.go:130] > Mar 28 01:32:49 multinode-240000 kubelet[1533]: E0328 01:32:49.790669    1533 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-76f75df574-776ph" podUID="dc1416cc-736d-4eab-b95d-e963572b78e3"
	I0328 01:33:35.841602    6044 command_runner.go:130] > Mar 28 01:32:49 multinode-240000 kubelet[1533]: E0328 01:32:49.800302    1533 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7fdf7869d9-ct428" podUID="82be2bd2-ca76-4804-8e23-ebd40a434863"
	I0328 01:33:35.841602    6044 command_runner.go:130] > Mar 28 01:32:51 multinode-240000 kubelet[1533]: E0328 01:32:51.398046    1533 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0328 01:33:35.842181    6044 command_runner.go:130] > Mar 28 01:32:51 multinode-240000 kubelet[1533]: E0328 01:32:51.399557    1533 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/dc1416cc-736d-4eab-b95d-e963572b78e3-config-volume podName:dc1416cc-736d-4eab-b95d-e963572b78e3 nodeName:}" failed. No retries permitted until 2024-03-28 01:33:23.399534782 +0000 UTC m=+69.958544472 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/dc1416cc-736d-4eab-b95d-e963572b78e3-config-volume") pod "coredns-76f75df574-776ph" (UID: "dc1416cc-736d-4eab-b95d-e963572b78e3") : object "kube-system"/"coredns" not registered
	I0328 01:33:35.842332    6044 command_runner.go:130] > Mar 28 01:32:51 multinode-240000 kubelet[1533]: E0328 01:32:51.499389    1533 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0328 01:33:35.842366    6044 command_runner.go:130] > Mar 28 01:32:51 multinode-240000 kubelet[1533]: E0328 01:32:51.499479    1533 projected.go:200] Error preparing data for projected volume kube-api-access-86msg for pod default/busybox-7fdf7869d9-ct428: object "default"/"kube-root-ca.crt" not registered
	I0328 01:33:35.842366    6044 command_runner.go:130] > Mar 28 01:32:51 multinode-240000 kubelet[1533]: E0328 01:32:51.499555    1533 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/82be2bd2-ca76-4804-8e23-ebd40a434863-kube-api-access-86msg podName:82be2bd2-ca76-4804-8e23-ebd40a434863 nodeName:}" failed. No retries permitted until 2024-03-28 01:33:23.499533548 +0000 UTC m=+70.058543238 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-86msg" (UniqueName: "kubernetes.io/projected/82be2bd2-ca76-4804-8e23-ebd40a434863-kube-api-access-86msg") pod "busybox-7fdf7869d9-ct428" (UID: "82be2bd2-ca76-4804-8e23-ebd40a434863") : object "default"/"kube-root-ca.crt" not registered
	I0328 01:33:35.842366    6044 command_runner.go:130] > Mar 28 01:32:51 multinode-240000 kubelet[1533]: E0328 01:32:51.789982    1533 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7fdf7869d9-ct428" podUID="82be2bd2-ca76-4804-8e23-ebd40a434863"
	I0328 01:33:35.842366    6044 command_runner.go:130] > Mar 28 01:32:51 multinode-240000 kubelet[1533]: E0328 01:32:51.790491    1533 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-76f75df574-776ph" podUID="dc1416cc-736d-4eab-b95d-e963572b78e3"
	I0328 01:33:35.842366    6044 command_runner.go:130] > Mar 28 01:32:52 multinode-240000 kubelet[1533]: I0328 01:32:52.819055    1533 scope.go:117] "RemoveContainer" containerID="d02996b2d57bf7439b634e180f3f28e83a0825e92695a9ca17ecca77cbb5da1c"
	I0328 01:33:35.842366    6044 command_runner.go:130] > Mar 28 01:32:52 multinode-240000 kubelet[1533]: I0328 01:32:52.819508    1533 scope.go:117] "RemoveContainer" containerID="4dcf03394ea80c2d0427ea263be7c533ab3aaf86ba89e254db95cbdd1b70a343"
	I0328 01:33:35.842366    6044 command_runner.go:130] > Mar 28 01:32:52 multinode-240000 kubelet[1533]: E0328 01:32:52.820004    1533 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(3f881c2f-5b9a-4dc5-bc8c-56784b9ff60f)\"" pod="kube-system/storage-provisioner" podUID="3f881c2f-5b9a-4dc5-bc8c-56784b9ff60f"
	I0328 01:33:35.842366    6044 command_runner.go:130] > Mar 28 01:32:53 multinode-240000 kubelet[1533]: E0328 01:32:53.789452    1533 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7fdf7869d9-ct428" podUID="82be2bd2-ca76-4804-8e23-ebd40a434863"
	I0328 01:33:35.842366    6044 command_runner.go:130] > Mar 28 01:32:53 multinode-240000 kubelet[1533]: E0328 01:32:53.791042    1533 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-76f75df574-776ph" podUID="dc1416cc-736d-4eab-b95d-e963572b78e3"
	I0328 01:33:35.842366    6044 command_runner.go:130] > Mar 28 01:32:53 multinode-240000 kubelet[1533]: I0328 01:32:53.945064    1533 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
	I0328 01:33:35.842366    6044 command_runner.go:130] > Mar 28 01:33:04 multinode-240000 kubelet[1533]: I0328 01:33:04.789137    1533 scope.go:117] "RemoveContainer" containerID="4dcf03394ea80c2d0427ea263be7c533ab3aaf86ba89e254db95cbdd1b70a343"
	I0328 01:33:35.842366    6044 command_runner.go:130] > Mar 28 01:33:13 multinode-240000 kubelet[1533]: I0328 01:33:13.803616    1533 scope.go:117] "RemoveContainer" containerID="66f15076d3443d3fc3179676ba45f1cbac7cf2eb673e7741a3dddae0eb5baac8"
	I0328 01:33:35.842366    6044 command_runner.go:130] > Mar 28 01:33:13 multinode-240000 kubelet[1533]: E0328 01:33:13.838374    1533 iptables.go:575] "Could not set up iptables canary" err=<
	I0328 01:33:35.842366    6044 command_runner.go:130] > Mar 28 01:33:13 multinode-240000 kubelet[1533]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	I0328 01:33:35.842366    6044 command_runner.go:130] > Mar 28 01:33:13 multinode-240000 kubelet[1533]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	I0328 01:33:35.842366    6044 command_runner.go:130] > Mar 28 01:33:13 multinode-240000 kubelet[1533]:         Perhaps ip6tables or your kernel needs to be upgraded.
	I0328 01:33:35.842366    6044 command_runner.go:130] > Mar 28 01:33:13 multinode-240000 kubelet[1533]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	I0328 01:33:35.842366    6044 command_runner.go:130] > Mar 28 01:33:13 multinode-240000 kubelet[1533]: I0328 01:33:13.850324    1533 scope.go:117] "RemoveContainer" containerID="a01212226d03a29a5f7e096880ecf627817c14801c81f452beaa1a398b97cfe3"
	I0328 01:33:35.889902    6044 logs.go:123] Gathering logs for kube-apiserver [6539c85e1b61] ...
	I0328 01:33:35.889902    6044 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6539c85e1b61"
	I0328 01:33:35.917732    6044 command_runner.go:130] ! I0328 01:32:16.440903       1 options.go:222] external host was not specified, using 172.28.229.19
	I0328 01:33:35.918668    6044 command_runner.go:130] ! I0328 01:32:16.443001       1 server.go:148] Version: v1.29.3
	I0328 01:33:35.918711    6044 command_runner.go:130] ! I0328 01:32:16.443211       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0328 01:33:35.918776    6044 command_runner.go:130] ! I0328 01:32:17.234065       1 shared_informer.go:311] Waiting for caches to sync for node_authorizer
	I0328 01:33:35.918846    6044 command_runner.go:130] ! I0328 01:32:17.251028       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0328 01:33:35.918922    6044 command_runner.go:130] ! I0328 01:32:17.252647       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0328 01:33:35.918922    6044 command_runner.go:130] ! I0328 01:32:17.253295       1 instance.go:297] Using reconciler: lease
	I0328 01:33:35.918922    6044 command_runner.go:130] ! I0328 01:32:17.488371       1 handler.go:275] Adding GroupVersion apiextensions.k8s.io v1 to ResourceManager
	I0328 01:33:35.919001    6044 command_runner.go:130] ! W0328 01:32:17.492937       1 genericapiserver.go:742] Skipping API apiextensions.k8s.io/v1beta1 because it has no resources.
	I0328 01:33:35.919001    6044 command_runner.go:130] ! I0328 01:32:17.992938       1 handler.go:275] Adding GroupVersion  v1 to ResourceManager
	I0328 01:33:35.919001    6044 command_runner.go:130] ! I0328 01:32:17.993291       1 instance.go:693] API group "internal.apiserver.k8s.io" is not enabled, skipping.
	I0328 01:33:35.919001    6044 command_runner.go:130] ! I0328 01:32:18.498808       1 instance.go:693] API group "resource.k8s.io" is not enabled, skipping.
	I0328 01:33:35.919001    6044 command_runner.go:130] ! I0328 01:32:18.513162       1 handler.go:275] Adding GroupVersion authentication.k8s.io v1 to ResourceManager
	I0328 01:33:35.919001    6044 command_runner.go:130] ! W0328 01:32:18.513265       1 genericapiserver.go:742] Skipping API authentication.k8s.io/v1beta1 because it has no resources.
	I0328 01:33:35.919001    6044 command_runner.go:130] ! W0328 01:32:18.513276       1 genericapiserver.go:742] Skipping API authentication.k8s.io/v1alpha1 because it has no resources.
	I0328 01:33:35.919001    6044 command_runner.go:130] ! I0328 01:32:18.513869       1 handler.go:275] Adding GroupVersion authorization.k8s.io v1 to ResourceManager
	I0328 01:33:35.919195    6044 command_runner.go:130] ! W0328 01:32:18.513921       1 genericapiserver.go:742] Skipping API authorization.k8s.io/v1beta1 because it has no resources.
	I0328 01:33:35.919253    6044 command_runner.go:130] ! I0328 01:32:18.515227       1 handler.go:275] Adding GroupVersion autoscaling v2 to ResourceManager
	I0328 01:33:35.919348    6044 command_runner.go:130] ! I0328 01:32:18.516586       1 handler.go:275] Adding GroupVersion autoscaling v1 to ResourceManager
	I0328 01:33:35.919391    6044 command_runner.go:130] ! W0328 01:32:18.516885       1 genericapiserver.go:742] Skipping API autoscaling/v2beta1 because it has no resources.
	I0328 01:33:35.919434    6044 command_runner.go:130] ! W0328 01:32:18.516898       1 genericapiserver.go:742] Skipping API autoscaling/v2beta2 because it has no resources.
	I0328 01:33:35.919533    6044 command_runner.go:130] ! I0328 01:32:18.519356       1 handler.go:275] Adding GroupVersion batch v1 to ResourceManager
	I0328 01:33:35.919590    6044 command_runner.go:130] ! W0328 01:32:18.519460       1 genericapiserver.go:742] Skipping API batch/v1beta1 because it has no resources.
	I0328 01:33:35.919590    6044 command_runner.go:130] ! I0328 01:32:18.520668       1 handler.go:275] Adding GroupVersion certificates.k8s.io v1 to ResourceManager
	I0328 01:33:35.919686    6044 command_runner.go:130] ! W0328 01:32:18.520820       1 genericapiserver.go:742] Skipping API certificates.k8s.io/v1beta1 because it has no resources.
	I0328 01:33:35.919686    6044 command_runner.go:130] ! W0328 01:32:18.520830       1 genericapiserver.go:742] Skipping API certificates.k8s.io/v1alpha1 because it has no resources.
	I0328 01:33:35.919686    6044 command_runner.go:130] ! I0328 01:32:18.521802       1 handler.go:275] Adding GroupVersion coordination.k8s.io v1 to ResourceManager
	I0328 01:33:35.919686    6044 command_runner.go:130] ! W0328 01:32:18.521903       1 genericapiserver.go:742] Skipping API coordination.k8s.io/v1beta1 because it has no resources.
	I0328 01:33:35.919798    6044 command_runner.go:130] ! W0328 01:32:18.521953       1 genericapiserver.go:742] Skipping API discovery.k8s.io/v1beta1 because it has no resources.
	I0328 01:33:35.919798    6044 command_runner.go:130] ! I0328 01:32:18.523269       1 handler.go:275] Adding GroupVersion discovery.k8s.io v1 to ResourceManager
	I0328 01:33:35.919798    6044 command_runner.go:130] ! I0328 01:32:18.525859       1 handler.go:275] Adding GroupVersion networking.k8s.io v1 to ResourceManager
	I0328 01:33:35.919912    6044 command_runner.go:130] ! W0328 01:32:18.525960       1 genericapiserver.go:742] Skipping API networking.k8s.io/v1beta1 because it has no resources.
	I0328 01:33:35.919946    6044 command_runner.go:130] ! W0328 01:32:18.525970       1 genericapiserver.go:742] Skipping API networking.k8s.io/v1alpha1 because it has no resources.
	I0328 01:33:35.919969    6044 command_runner.go:130] ! I0328 01:32:18.526646       1 handler.go:275] Adding GroupVersion node.k8s.io v1 to ResourceManager
	I0328 01:33:35.920017    6044 command_runner.go:130] ! W0328 01:32:18.526842       1 genericapiserver.go:742] Skipping API node.k8s.io/v1beta1 because it has no resources.
	I0328 01:33:35.920050    6044 command_runner.go:130] ! W0328 01:32:18.526857       1 genericapiserver.go:742] Skipping API node.k8s.io/v1alpha1 because it has no resources.
	I0328 01:33:35.920072    6044 command_runner.go:130] ! I0328 01:32:18.527970       1 handler.go:275] Adding GroupVersion policy v1 to ResourceManager
	I0328 01:33:35.920072    6044 command_runner.go:130] ! W0328 01:32:18.528080       1 genericapiserver.go:742] Skipping API policy/v1beta1 because it has no resources.
	I0328 01:33:35.920072    6044 command_runner.go:130] ! I0328 01:32:18.530546       1 handler.go:275] Adding GroupVersion rbac.authorization.k8s.io v1 to ResourceManager
	I0328 01:33:35.920072    6044 command_runner.go:130] ! W0328 01:32:18.530652       1 genericapiserver.go:742] Skipping API rbac.authorization.k8s.io/v1beta1 because it has no resources.
	I0328 01:33:35.920072    6044 command_runner.go:130] ! W0328 01:32:18.530663       1 genericapiserver.go:742] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
	I0328 01:33:35.920154    6044 command_runner.go:130] ! I0328 01:32:18.531469       1 handler.go:275] Adding GroupVersion scheduling.k8s.io v1 to ResourceManager
	I0328 01:33:35.920154    6044 command_runner.go:130] ! W0328 01:32:18.531576       1 genericapiserver.go:742] Skipping API scheduling.k8s.io/v1beta1 because it has no resources.
	I0328 01:33:35.920200    6044 command_runner.go:130] ! W0328 01:32:18.531586       1 genericapiserver.go:742] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
	I0328 01:33:35.920200    6044 command_runner.go:130] ! I0328 01:32:18.534848       1 handler.go:275] Adding GroupVersion storage.k8s.io v1 to ResourceManager
	I0328 01:33:35.920200    6044 command_runner.go:130] ! W0328 01:32:18.534946       1 genericapiserver.go:742] Skipping API storage.k8s.io/v1beta1 because it has no resources.
	I0328 01:33:35.920200    6044 command_runner.go:130] ! W0328 01:32:18.534974       1 genericapiserver.go:742] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
	I0328 01:33:35.920200    6044 command_runner.go:130] ! I0328 01:32:18.537355       1 handler.go:275] Adding GroupVersion flowcontrol.apiserver.k8s.io v1beta3 to ResourceManager
	I0328 01:33:35.920267    6044 command_runner.go:130] ! I0328 01:32:18.539242       1 handler.go:275] Adding GroupVersion flowcontrol.apiserver.k8s.io v1 to ResourceManager
	I0328 01:33:35.920313    6044 command_runner.go:130] ! W0328 01:32:18.539354       1 genericapiserver.go:742] Skipping API flowcontrol.apiserver.k8s.io/v1beta2 because it has no resources.
	I0328 01:33:35.920351    6044 command_runner.go:130] ! W0328 01:32:18.539387       1 genericapiserver.go:742] Skipping API flowcontrol.apiserver.k8s.io/v1beta1 because it has no resources.
	I0328 01:33:35.920408    6044 command_runner.go:130] ! I0328 01:32:18.545662       1 handler.go:275] Adding GroupVersion apps v1 to ResourceManager
	I0328 01:33:35.920542    6044 command_runner.go:130] ! W0328 01:32:18.545825       1 genericapiserver.go:742] Skipping API apps/v1beta2 because it has no resources.
	I0328 01:33:35.920628    6044 command_runner.go:130] ! W0328 01:32:18.545834       1 genericapiserver.go:742] Skipping API apps/v1beta1 because it has no resources.
	I0328 01:33:35.920663    6044 command_runner.go:130] ! I0328 01:32:18.547229       1 handler.go:275] Adding GroupVersion admissionregistration.k8s.io v1 to ResourceManager
	I0328 01:33:35.920663    6044 command_runner.go:130] ! W0328 01:32:18.547341       1 genericapiserver.go:742] Skipping API admissionregistration.k8s.io/v1beta1 because it has no resources.
	I0328 01:33:35.920663    6044 command_runner.go:130] ! W0328 01:32:18.547350       1 genericapiserver.go:742] Skipping API admissionregistration.k8s.io/v1alpha1 because it has no resources.
	I0328 01:33:35.920663    6044 command_runner.go:130] ! I0328 01:32:18.548292       1 handler.go:275] Adding GroupVersion events.k8s.io v1 to ResourceManager
	I0328 01:33:35.920663    6044 command_runner.go:130] ! W0328 01:32:18.548390       1 genericapiserver.go:742] Skipping API events.k8s.io/v1beta1 because it has no resources.
	I0328 01:33:35.920663    6044 command_runner.go:130] ! I0328 01:32:18.574598       1 handler.go:275] Adding GroupVersion apiregistration.k8s.io v1 to ResourceManager
	I0328 01:33:35.920663    6044 command_runner.go:130] ! W0328 01:32:18.574814       1 genericapiserver.go:742] Skipping API apiregistration.k8s.io/v1beta1 because it has no resources.
	I0328 01:33:35.920663    6044 command_runner.go:130] ! I0328 01:32:19.274952       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0328 01:33:35.920663    6044 command_runner.go:130] ! I0328 01:32:19.275081       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0328 01:33:35.920663    6044 command_runner.go:130] ! I0328 01:32:19.275445       1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key"
	I0328 01:33:35.920663    6044 command_runner.go:130] ! I0328 01:32:19.275546       1 secure_serving.go:213] Serving securely on [::]:8443
	I0328 01:33:35.920663    6044 command_runner.go:130] ! I0328 01:32:19.275631       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0328 01:33:35.920663    6044 command_runner.go:130] ! I0328 01:32:19.276130       1 controller.go:80] Starting OpenAPI V3 AggregationController
	I0328 01:33:35.920663    6044 command_runner.go:130] ! I0328 01:32:19.279110       1 available_controller.go:423] Starting AvailableConditionController
	I0328 01:33:35.920663    6044 command_runner.go:130] ! I0328 01:32:19.280530       1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
	I0328 01:33:35.920663    6044 command_runner.go:130] ! I0328 01:32:19.289454       1 controller.go:116] Starting legacy_token_tracking_controller
	I0328 01:33:35.920663    6044 command_runner.go:130] ! I0328 01:32:19.289554       1 shared_informer.go:311] Waiting for caches to sync for configmaps
	I0328 01:33:35.920663    6044 command_runner.go:130] ! I0328 01:32:19.289661       1 aggregator.go:163] waiting for initial CRD sync...
	I0328 01:33:35.920663    6044 command_runner.go:130] ! I0328 01:32:19.291196       1 gc_controller.go:78] Starting apiserver lease garbage collector
	I0328 01:33:35.920663    6044 command_runner.go:130] ! I0328 01:32:19.291542       1 dynamic_serving_content.go:132] "Starting controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key"
	I0328 01:33:35.920663    6044 command_runner.go:130] ! I0328 01:32:19.292314       1 apiservice_controller.go:97] Starting APIServiceRegistrationController
	I0328 01:33:35.920663    6044 command_runner.go:130] ! I0328 01:32:19.292353       1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
	I0328 01:33:35.920663    6044 command_runner.go:130] ! I0328 01:32:19.292376       1 controller.go:78] Starting OpenAPI AggregationController
	I0328 01:33:35.920663    6044 command_runner.go:130] ! I0328 01:32:19.293395       1 system_namespaces_controller.go:67] Starting system namespaces controller
	I0328 01:33:35.920663    6044 command_runner.go:130] ! I0328 01:32:19.293575       1 customresource_discovery_controller.go:289] Starting DiscoveryController
	I0328 01:33:35.920663    6044 command_runner.go:130] ! I0328 01:32:19.279263       1 handler_discovery.go:412] Starting ResourceDiscoveryManager
	I0328 01:33:35.920663    6044 command_runner.go:130] ! I0328 01:32:19.301011       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0328 01:33:35.920663    6044 command_runner.go:130] ! I0328 01:32:19.301029       1 shared_informer.go:311] Waiting for caches to sync for crd-autoregister
	I0328 01:33:35.920663    6044 command_runner.go:130] ! I0328 01:32:19.304174       1 controller.go:133] Starting OpenAPI controller
	I0328 01:33:35.920663    6044 command_runner.go:130] ! I0328 01:32:19.304213       1 controller.go:85] Starting OpenAPI V3 controller
	I0328 01:33:35.920663    6044 command_runner.go:130] ! I0328 01:32:19.306745       1 naming_controller.go:291] Starting NamingConditionController
	I0328 01:33:35.920663    6044 command_runner.go:130] ! I0328 01:32:19.306779       1 establishing_controller.go:76] Starting EstablishingController
	I0328 01:33:35.920663    6044 command_runner.go:130] ! I0328 01:32:19.306794       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0328 01:33:35.920663    6044 command_runner.go:130] ! I0328 01:32:19.306807       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0328 01:33:35.920663    6044 command_runner.go:130] ! I0328 01:32:19.306818       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0328 01:33:35.921221    6044 command_runner.go:130] ! I0328 01:32:19.279295       1 apf_controller.go:374] Starting API Priority and Fairness config controller
	I0328 01:33:35.921221    6044 command_runner.go:130] ! I0328 01:32:19.279442       1 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
	I0328 01:33:35.921273    6044 command_runner.go:130] ! I0328 01:32:19.312069       1 shared_informer.go:311] Waiting for caches to sync for cluster_authentication_trust_controller
	I0328 01:33:35.921273    6044 command_runner.go:130] ! I0328 01:32:19.334928       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0328 01:33:35.921273    6044 command_runner.go:130] ! I0328 01:32:19.335653       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0328 01:33:35.921335    6044 command_runner.go:130] ! I0328 01:32:19.499336       1 shared_informer.go:318] Caches are synced for configmaps
	I0328 01:33:35.921335    6044 command_runner.go:130] ! I0328 01:32:19.501912       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0328 01:33:35.921374    6044 command_runner.go:130] ! I0328 01:32:19.504433       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0328 01:33:35.921374    6044 command_runner.go:130] ! I0328 01:32:19.506496       1 aggregator.go:165] initial CRD sync complete...
	I0328 01:33:35.921404    6044 command_runner.go:130] ! I0328 01:32:19.506538       1 autoregister_controller.go:141] Starting autoregister controller
	I0328 01:33:35.921404    6044 command_runner.go:130] ! I0328 01:32:19.506548       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0328 01:33:35.921404    6044 command_runner.go:130] ! I0328 01:32:19.506871       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0328 01:33:35.921404    6044 command_runner.go:130] ! I0328 01:32:19.506977       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0328 01:33:35.921453    6044 command_runner.go:130] ! I0328 01:32:19.519086       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0328 01:33:35.921492    6044 command_runner.go:130] ! I0328 01:32:19.542058       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0328 01:33:35.921492    6044 command_runner.go:130] ! I0328 01:32:19.580921       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0328 01:33:35.921543    6044 command_runner.go:130] ! I0328 01:32:19.592848       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0328 01:33:35.921543    6044 command_runner.go:130] ! I0328 01:32:19.608262       1 cache.go:39] Caches are synced for autoregister controller
	I0328 01:33:35.921543    6044 command_runner.go:130] ! I0328 01:32:20.302603       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0328 01:33:35.921582    6044 command_runner.go:130] ! W0328 01:32:20.857698       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.28.227.122 172.28.229.19]
	I0328 01:33:35.921582    6044 command_runner.go:130] ! I0328 01:32:20.859624       1 controller.go:624] quota admission added evaluator for: endpoints
	I0328 01:33:35.921582    6044 command_runner.go:130] ! I0328 01:32:20.870212       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0328 01:33:35.921629    6044 command_runner.go:130] ! I0328 01:32:22.795650       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0328 01:33:35.921629    6044 command_runner.go:130] ! I0328 01:32:23.151124       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0328 01:33:35.921629    6044 command_runner.go:130] ! I0328 01:32:23.177645       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0328 01:33:35.921660    6044 command_runner.go:130] ! I0328 01:32:23.338313       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0328 01:33:35.921660    6044 command_runner.go:130] ! I0328 01:32:23.353620       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0328 01:33:35.921660    6044 command_runner.go:130] ! W0328 01:32:40.864669       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.28.229.19]
	I0328 01:33:35.935247    6044 logs.go:123] Gathering logs for etcd [ab4a76ecb029] ...
	I0328 01:33:35.935247    6044 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab4a76ecb029"
	I0328 01:33:35.973581    6044 command_runner.go:130] ! {"level":"warn","ts":"2024-03-28T01:32:15.724971Z","caller":"embed/config.go:679","msg":"Running http and grpc server on single port. This is not recommended for production."}
	I0328 01:33:35.973581    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:15.726473Z","caller":"etcdmain/etcd.go:73","msg":"Running: ","args":["etcd","--advertise-client-urls=https://172.28.229.19:2379","--cert-file=/var/lib/minikube/certs/etcd/server.crt","--client-cert-auth=true","--data-dir=/var/lib/minikube/etcd","--experimental-initial-corrupt-check=true","--experimental-watch-progress-notify-interval=5s","--initial-advertise-peer-urls=https://172.28.229.19:2380","--initial-cluster=multinode-240000=https://172.28.229.19:2380","--key-file=/var/lib/minikube/certs/etcd/server.key","--listen-client-urls=https://127.0.0.1:2379,https://172.28.229.19:2379","--listen-metrics-urls=http://127.0.0.1:2381","--listen-peer-urls=https://172.28.229.19:2380","--name=multinode-240000","--peer-cert-file=/var/lib/minikube/certs/etcd/peer.crt","--peer-client-cert-auth=true","--peer-key-file=/var/lib/minikube/certs/etcd/peer.key","--peer-trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt","--proxy-refresh-interval=70000","--snapshot-count=10000","--trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt"]}
	I0328 01:33:35.973581    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:15.727203Z","caller":"etcdmain/etcd.go:116","msg":"server has been already initialized","data-dir":"/var/lib/minikube/etcd","dir-type":"member"}
	I0328 01:33:35.973581    6044 command_runner.go:130] ! {"level":"warn","ts":"2024-03-28T01:32:15.727384Z","caller":"embed/config.go:679","msg":"Running http and grpc server on single port. This is not recommended for production."}
	I0328 01:33:35.973581    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:15.727623Z","caller":"embed/etcd.go:127","msg":"configuring peer listeners","listen-peer-urls":["https://172.28.229.19:2380"]}
	I0328 01:33:35.973581    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:15.728158Z","caller":"embed/etcd.go:494","msg":"starting with peer TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	I0328 01:33:35.973581    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:15.738374Z","caller":"embed/etcd.go:135","msg":"configuring client listeners","listen-client-urls":["https://127.0.0.1:2379","https://172.28.229.19:2379"]}
	I0328 01:33:35.974717    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:15.74108Z","caller":"embed/etcd.go:308","msg":"starting an etcd server","etcd-version":"3.5.12","git-sha":"e7b3bb6cc","go-version":"go1.20.13","go-os":"linux","go-arch":"amd64","max-cpu-set":2,"max-cpu-available":2,"member-initialized":true,"name":"multinode-240000","data-dir":"/var/lib/minikube/etcd","wal-dir":"","wal-dir-dedicated":"","member-dir":"/var/lib/minikube/etcd/member","force-new-cluster":false,"heartbeat-interval":"100ms","election-timeout":"1s","initial-election-tick-advance":true,"snapshot-count":10000,"max-wals":5,"max-snapshots":5,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["https://172.28.229.19:2380"],"listen-peer-urls":["https://172.28.229.19:2380"],"advertise-client-urls":["https://172.28.229.19:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.28.229.19:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"],"cors":["*"],"host-whitelist":["*"],"initial-cluster":"","initial-cluster-state":"new","initial-cluster-token":"","quota-backend-bytes":2147483648,"max-request-bytes":1572864,"max-concurrent-streams":4294967295,"pre-vote":true,"initial-corrupt-check":true,"corrupt-check-time-interval":"0s","compact-check-time-enabled":false,"compact-check-time-interval":"1m0s","auto-compaction-mode":"periodic","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":"","downgrade-check-interval":"5s"}
	I0328 01:33:35.974766    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:15.764546Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"21.677054ms"}
	I0328 01:33:35.974832    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:15.798451Z","caller":"etcdserver/server.go:532","msg":"No snapshot found. Recovering WAL from scratch!"}
	I0328 01:33:35.974874    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:15.829844Z","caller":"etcdserver/raft.go:530","msg":"restarting local member","cluster-id":"9d63dbc5e8f5386f","local-member-id":"8337aaa1903c5250","commit-index":2146}
	I0328 01:33:35.974936    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:15.830336Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8337aaa1903c5250 switched to configuration voters=()"}
	I0328 01:33:35.974936    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:15.830979Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8337aaa1903c5250 became follower at term 2"}
	I0328 01:33:35.974975    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:15.831279Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft 8337aaa1903c5250 [peers: [], term: 2, commit: 2146, applied: 0, lastindex: 2146, lastterm: 2]"}
	I0328 01:33:35.974975    6044 command_runner.go:130] ! {"level":"warn","ts":"2024-03-28T01:32:15.847923Z","caller":"auth/store.go:1241","msg":"simple token is not cryptographically signed"}
	I0328 01:33:35.975063    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:15.855761Z","caller":"mvcc/kvstore.go:341","msg":"restored last compact revision","meta-bucket-name":"meta","meta-bucket-name-key":"finishedCompactRev","restored-compact-revision":1393}
	I0328 01:33:35.975063    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:15.869333Z","caller":"mvcc/kvstore.go:407","msg":"kvstore restored","current-rev":1856}
	I0328 01:33:35.975112    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:15.878748Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	I0328 01:33:35.975151    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:15.88958Z","caller":"etcdserver/corrupt.go:96","msg":"starting initial corruption check","local-member-id":"8337aaa1903c5250","timeout":"7s"}
	I0328 01:33:35.975201    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:15.890509Z","caller":"etcdserver/corrupt.go:177","msg":"initial corruption checking passed; no corruption","local-member-id":"8337aaa1903c5250"}
	I0328 01:33:35.975201    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:15.890567Z","caller":"etcdserver/server.go:860","msg":"starting etcd server","local-member-id":"8337aaa1903c5250","local-server-version":"3.5.12","cluster-version":"to_be_decided"}
	I0328 01:33:35.975239    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:15.891226Z","caller":"etcdserver/server.go:760","msg":"starting initial election tick advance","election-ticks":10}
	I0328 01:33:35.975283    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:15.894393Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	I0328 01:33:35.975322    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:15.894489Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	I0328 01:33:35.975371    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:15.894506Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	I0328 01:33:35.975410    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:15.895008Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8337aaa1903c5250 switched to configuration voters=(9455213553573974608)"}
	I0328 01:33:35.975410    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:15.895115Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"9d63dbc5e8f5386f","local-member-id":"8337aaa1903c5250","added-peer-id":"8337aaa1903c5250","added-peer-peer-urls":["https://172.28.227.122:2380"]}
	I0328 01:33:35.975410    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:15.895259Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"9d63dbc5e8f5386f","local-member-id":"8337aaa1903c5250","cluster-version":"3.5"}
	I0328 01:33:35.975410    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:15.895348Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	I0328 01:33:35.975410    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:15.908515Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	I0328 01:33:35.975543    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:15.908865Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"8337aaa1903c5250","initial-advertise-peer-urls":["https://172.28.229.19:2380"],"listen-peer-urls":["https://172.28.229.19:2380"],"advertise-client-urls":["https://172.28.229.19:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.28.229.19:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	I0328 01:33:35.975589    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:15.908914Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	I0328 01:33:35.975629    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:15.908997Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"172.28.229.19:2380"}
	I0328 01:33:35.975665    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:15.909011Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"172.28.229.19:2380"}
	I0328 01:33:35.975665    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:17.232003Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8337aaa1903c5250 is starting a new election at term 2"}
	I0328 01:33:35.975697    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:17.232075Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8337aaa1903c5250 became pre-candidate at term 2"}
	I0328 01:33:35.975697    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:17.232112Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8337aaa1903c5250 received MsgPreVoteResp from 8337aaa1903c5250 at term 2"}
	I0328 01:33:35.975697    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:17.232126Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8337aaa1903c5250 became candidate at term 3"}
	I0328 01:33:35.975845    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:17.232135Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8337aaa1903c5250 received MsgVoteResp from 8337aaa1903c5250 at term 3"}
	I0328 01:33:35.975845    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:17.232146Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8337aaa1903c5250 became leader at term 3"}
	I0328 01:33:35.975910    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:17.232158Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 8337aaa1903c5250 elected leader 8337aaa1903c5250 at term 3"}
	I0328 01:33:35.975910    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:17.237341Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"8337aaa1903c5250","local-member-attributes":"{Name:multinode-240000 ClientURLs:[https://172.28.229.19:2379]}","request-path":"/0/members/8337aaa1903c5250/attributes","cluster-id":"9d63dbc5e8f5386f","publish-timeout":"7s"}
	I0328 01:33:35.975910    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:17.237562Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	I0328 01:33:35.975910    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:17.239961Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	I0328 01:33:35.975910    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:17.263569Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	I0328 01:33:35.976014    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:17.263595Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	I0328 01:33:35.976014    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:17.283007Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.28.229.19:2379"}
	I0328 01:33:35.976014    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:17.301354Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	I0328 01:33:35.987092    6044 logs.go:123] Gathering logs for kube-scheduler [7061eab02790] ...
	I0328 01:33:35.987092    6044 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7061eab02790"
	I0328 01:33:36.020438    6044 command_runner.go:130] ! I0328 01:07:24.655923       1 serving.go:380] Generated self-signed cert in-memory
	I0328 01:33:36.020501    6044 command_runner.go:130] ! W0328 01:07:26.955719       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	I0328 01:33:36.020565    6044 command_runner.go:130] ! W0328 01:07:26.956050       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	I0328 01:33:36.020565    6044 command_runner.go:130] ! W0328 01:07:26.956340       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	I0328 01:33:36.020624    6044 command_runner.go:130] ! W0328 01:07:26.956518       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0328 01:33:36.020647    6044 command_runner.go:130] ! I0328 01:07:27.011654       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.29.3"
	I0328 01:33:36.020647    6044 command_runner.go:130] ! I0328 01:07:27.011702       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0328 01:33:36.020708    6044 command_runner.go:130] ! I0328 01:07:27.016073       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0328 01:33:36.020708    6044 command_runner.go:130] ! I0328 01:07:27.016395       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0328 01:33:36.020742    6044 command_runner.go:130] ! I0328 01:07:27.016638       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0328 01:33:36.020863    6044 command_runner.go:130] ! W0328 01:07:27.041308       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0328 01:33:36.020863    6044 command_runner.go:130] ! E0328 01:07:27.041400       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0328 01:33:36.020863    6044 command_runner.go:130] ! W0328 01:07:27.041664       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0328 01:33:36.020863    6044 command_runner.go:130] ! E0328 01:07:27.043394       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0328 01:33:36.020863    6044 command_runner.go:130] ! I0328 01:07:27.016423       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0328 01:33:36.020863    6044 command_runner.go:130] ! W0328 01:07:27.042004       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0328 01:33:36.020863    6044 command_runner.go:130] ! E0328 01:07:27.047333       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0328 01:33:36.020863    6044 command_runner.go:130] ! W0328 01:07:27.042140       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0328 01:33:36.020863    6044 command_runner.go:130] ! E0328 01:07:27.047417       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0328 01:33:36.020863    6044 command_runner.go:130] ! W0328 01:07:27.042578       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0328 01:33:36.020863    6044 command_runner.go:130] ! E0328 01:07:27.047834       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0328 01:33:36.020863    6044 command_runner.go:130] ! W0328 01:07:27.042825       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0328 01:33:36.020863    6044 command_runner.go:130] ! E0328 01:07:27.047881       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0328 01:33:36.020863    6044 command_runner.go:130] ! W0328 01:07:27.054199       1 reflector.go:539] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0328 01:33:36.020863    6044 command_runner.go:130] ! E0328 01:07:27.054246       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0328 01:33:36.020863    6044 command_runner.go:130] ! W0328 01:07:27.054853       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0328 01:33:36.020863    6044 command_runner.go:130] ! E0328 01:07:27.054928       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0328 01:33:36.020863    6044 command_runner.go:130] ! W0328 01:07:27.055680       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0328 01:33:36.020863    6044 command_runner.go:130] ! E0328 01:07:27.056176       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0328 01:33:36.020863    6044 command_runner.go:130] ! W0328 01:07:27.056445       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0328 01:33:36.020863    6044 command_runner.go:130] ! E0328 01:07:27.056649       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0328 01:33:36.020863    6044 command_runner.go:130] ! W0328 01:07:27.056923       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0328 01:33:36.020863    6044 command_runner.go:130] ! E0328 01:07:27.057184       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0328 01:33:36.020863    6044 command_runner.go:130] ! W0328 01:07:27.057363       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0328 01:33:36.020863    6044 command_runner.go:130] ! E0328 01:07:27.057575       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0328 01:33:36.020863    6044 command_runner.go:130] ! W0328 01:07:27.057920       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0328 01:33:36.021771    6044 command_runner.go:130] ! E0328 01:07:27.058160       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0328 01:33:36.021771    6044 command_runner.go:130] ! W0328 01:07:27.058539       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0328 01:33:36.021771    6044 command_runner.go:130] ! E0328 01:07:27.058924       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0328 01:33:36.021771    6044 command_runner.go:130] ! W0328 01:07:27.059533       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0328 01:33:36.021937    6044 command_runner.go:130] ! E0328 01:07:27.060749       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0328 01:33:36.021937    6044 command_runner.go:130] ! W0328 01:07:27.927413       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0328 01:33:36.021937    6044 command_runner.go:130] ! E0328 01:07:27.927826       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0328 01:33:36.022012    6044 command_runner.go:130] ! W0328 01:07:28.013939       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0328 01:33:36.022056    6044 command_runner.go:130] ! E0328 01:07:28.014242       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0328 01:33:36.022122    6044 command_runner.go:130] ! W0328 01:07:28.056311       1 reflector.go:539] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0328 01:33:36.022164    6044 command_runner.go:130] ! E0328 01:07:28.058850       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0328 01:33:36.022164    6044 command_runner.go:130] ! W0328 01:07:28.076506       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0328 01:33:36.022242    6044 command_runner.go:130] ! E0328 01:07:28.076537       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0328 01:33:36.022242    6044 command_runner.go:130] ! W0328 01:07:28.106836       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0328 01:33:36.022320    6044 command_runner.go:130] ! E0328 01:07:28.107081       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0328 01:33:36.022320    6044 command_runner.go:130] ! W0328 01:07:28.240756       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0328 01:33:36.022320    6044 command_runner.go:130] ! E0328 01:07:28.240834       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0328 01:33:36.022444    6044 command_runner.go:130] ! W0328 01:07:28.255074       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0328 01:33:36.022444    6044 command_runner.go:130] ! E0328 01:07:28.255356       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0328 01:33:36.022444    6044 command_runner.go:130] ! W0328 01:07:28.278207       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0328 01:33:36.022558    6044 command_runner.go:130] ! E0328 01:07:28.278668       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0328 01:33:36.022558    6044 command_runner.go:130] ! W0328 01:07:28.381584       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0328 01:33:36.022645    6044 command_runner.go:130] ! E0328 01:07:28.381627       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0328 01:33:36.022645    6044 command_runner.go:130] ! W0328 01:07:28.514618       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0328 01:33:36.022645    6044 command_runner.go:130] ! E0328 01:07:28.515155       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0328 01:33:36.022786    6044 command_runner.go:130] ! W0328 01:07:28.528993       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0328 01:33:36.022786    6044 command_runner.go:130] ! E0328 01:07:28.529395       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0328 01:33:36.022871    6044 command_runner.go:130] ! W0328 01:07:28.532653       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0328 01:33:36.022871    6044 command_runner.go:130] ! E0328 01:07:28.532704       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0328 01:33:36.022871    6044 command_runner.go:130] ! W0328 01:07:28.584380       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0328 01:33:36.022951    6044 command_runner.go:130] ! E0328 01:07:28.585331       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0328 01:33:36.022951    6044 command_runner.go:130] ! W0328 01:07:28.617611       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0328 01:33:36.023031    6044 command_runner.go:130] ! E0328 01:07:28.618424       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0328 01:33:36.023031    6044 command_runner.go:130] ! W0328 01:07:28.646703       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0328 01:33:36.023031    6044 command_runner.go:130] ! E0328 01:07:28.647128       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0328 01:33:36.023129    6044 command_runner.go:130] ! I0328 01:07:30.316754       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0328 01:33:36.023129    6044 command_runner.go:130] ! I0328 01:29:38.212199       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0328 01:33:36.023164    6044 command_runner.go:130] ! I0328 01:29:38.213339       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0328 01:33:36.023164    6044 command_runner.go:130] ! I0328 01:29:38.213731       1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0328 01:33:36.023164    6044 command_runner.go:130] ! E0328 01:29:38.223877       1 run.go:74] "command failed" err="finished without leader elect"
	I0328 01:33:36.034103    6044 logs.go:123] Gathering logs for kube-controller-manager [1aa05268773e] ...
	I0328 01:33:36.034103    6044 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1aa05268773e"
	I0328 01:33:36.066167    6044 command_runner.go:130] ! I0328 01:07:25.444563       1 serving.go:380] Generated self-signed cert in-memory
	I0328 01:33:36.066167    6044 command_runner.go:130] ! I0328 01:07:26.119304       1 controllermanager.go:187] "Starting" version="v1.29.3"
	I0328 01:33:36.066167    6044 command_runner.go:130] ! I0328 01:07:26.119639       1 controllermanager.go:189] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0328 01:33:36.066167    6044 command_runner.go:130] ! I0328 01:07:26.122078       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0328 01:33:36.066167    6044 command_runner.go:130] ! I0328 01:07:26.122399       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0328 01:33:36.066295    6044 command_runner.go:130] ! I0328 01:07:26.123748       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0328 01:33:36.066295    6044 command_runner.go:130] ! I0328 01:07:26.124035       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0328 01:33:36.066343    6044 command_runner.go:130] ! I0328 01:07:29.961001       1 controllermanager.go:735] "Started controller" controller="serviceaccount-token-controller"
	I0328 01:33:36.066369    6044 command_runner.go:130] ! I0328 01:07:29.961384       1 shared_informer.go:311] Waiting for caches to sync for tokens
	I0328 01:33:36.066369    6044 command_runner.go:130] ! I0328 01:07:29.977654       1 controllermanager.go:735] "Started controller" controller="serviceaccount-controller"
	I0328 01:33:36.066369    6044 command_runner.go:130] ! I0328 01:07:29.978314       1 serviceaccounts_controller.go:111] "Starting service account controller"
	I0328 01:33:36.066369    6044 command_runner.go:130] ! I0328 01:07:29.978353       1 shared_informer.go:311] Waiting for caches to sync for service account
	I0328 01:33:36.066447    6044 command_runner.go:130] ! I0328 01:07:29.991603       1 controllermanager.go:735] "Started controller" controller="job-controller"
	I0328 01:33:36.066447    6044 command_runner.go:130] ! I0328 01:07:29.992075       1 job_controller.go:224] "Starting job controller"
	I0328 01:33:36.066447    6044 command_runner.go:130] ! I0328 01:07:29.992191       1 shared_informer.go:311] Waiting for caches to sync for job
	I0328 01:33:36.066447    6044 command_runner.go:130] ! I0328 01:07:30.016866       1 controllermanager.go:735] "Started controller" controller="cronjob-controller"
	I0328 01:33:36.066447    6044 command_runner.go:130] ! I0328 01:07:30.017722       1 cronjob_controllerv2.go:139] "Starting cronjob controller v2"
	I0328 01:33:36.066447    6044 command_runner.go:130] ! I0328 01:07:30.017738       1 shared_informer.go:311] Waiting for caches to sync for cronjob
	I0328 01:33:36.066529    6044 command_runner.go:130] ! I0328 01:07:30.032215       1 node_lifecycle_controller.go:425] "Controller will reconcile labels"
	I0328 01:33:36.066529    6044 command_runner.go:130] ! I0328 01:07:30.032285       1 controllermanager.go:735] "Started controller" controller="node-lifecycle-controller"
	I0328 01:33:36.066529    6044 command_runner.go:130] ! I0328 01:07:30.032300       1 core.go:294] "Warning: configure-cloud-routes is set, but no cloud provider specified. Will not configure cloud provider routes."
	I0328 01:33:36.066609    6044 command_runner.go:130] ! I0328 01:07:30.032309       1 controllermanager.go:713] "Warning: skipping controller" controller="node-route-controller"
	I0328 01:33:36.066609    6044 command_runner.go:130] ! I0328 01:07:30.032580       1 node_lifecycle_controller.go:459] "Sending events to api server"
	I0328 01:33:36.066609    6044 command_runner.go:130] ! I0328 01:07:30.032630       1 node_lifecycle_controller.go:470] "Starting node controller"
	I0328 01:33:36.066717    6044 command_runner.go:130] ! I0328 01:07:30.032638       1 shared_informer.go:311] Waiting for caches to sync for taint
	I0328 01:33:36.066717    6044 command_runner.go:130] ! I0328 01:07:30.048026       1 controllermanager.go:735] "Started controller" controller="persistentvolume-protection-controller"
	I0328 01:33:36.066717    6044 command_runner.go:130] ! I0328 01:07:30.048977       1 pv_protection_controller.go:78] "Starting PV protection controller"
	I0328 01:33:36.066717    6044 command_runner.go:130] ! I0328 01:07:30.049064       1 shared_informer.go:311] Waiting for caches to sync for PV protection
	I0328 01:33:36.066717    6044 command_runner.go:130] ! I0328 01:07:30.062689       1 shared_informer.go:318] Caches are synced for tokens
	I0328 01:33:36.066717    6044 command_runner.go:130] ! I0328 01:07:30.089724       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="cronjobs.batch"
	I0328 01:33:36.066717    6044 command_runner.go:130] ! I0328 01:07:30.089888       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="deployments.apps"
	I0328 01:33:36.066717    6044 command_runner.go:130] ! I0328 01:07:30.089911       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="jobs.batch"
	I0328 01:33:36.066717    6044 command_runner.go:130] ! W0328 01:07:30.089999       1 shared_informer.go:591] resyncPeriod 14h20m6.725226039s is smaller than resyncCheckPeriod 16h11m20.804614115s and the informer has already started. Changing it to 16h11m20.804614115s
	I0328 01:33:36.066717    6044 command_runner.go:130] ! I0328 01:07:30.090238       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="serviceaccounts"
	I0328 01:33:36.066926    6044 command_runner.go:130] ! I0328 01:07:30.090386       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="daemonsets.apps"
	I0328 01:33:36.066926    6044 command_runner.go:130] ! I0328 01:07:30.090486       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="csistoragecapacities.storage.k8s.io"
	I0328 01:33:36.067002    6044 command_runner.go:130] ! I0328 01:07:30.090728       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="horizontalpodautoscalers.autoscaling"
	I0328 01:33:36.067002    6044 command_runner.go:130] ! I0328 01:07:30.090833       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="leases.coordination.k8s.io"
	I0328 01:33:36.067002    6044 command_runner.go:130] ! I0328 01:07:30.090916       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="endpointslices.discovery.k8s.io"
	I0328 01:33:36.067068    6044 command_runner.go:130] ! I0328 01:07:30.091233       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="statefulsets.apps"
	I0328 01:33:36.067068    6044 command_runner.go:130] ! I0328 01:07:30.091333       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="endpoints"
	I0328 01:33:36.067068    6044 command_runner.go:130] ! I0328 01:07:30.091456       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="roles.rbac.authorization.k8s.io"
	I0328 01:33:36.067068    6044 command_runner.go:130] ! I0328 01:07:30.091573       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="rolebindings.rbac.authorization.k8s.io"
	I0328 01:33:36.067068    6044 command_runner.go:130] ! I0328 01:07:30.091823       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="ingresses.networking.k8s.io"
	I0328 01:33:36.067131    6044 command_runner.go:130] ! I0328 01:07:30.091924       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="networkpolicies.networking.k8s.io"
	I0328 01:33:36.067177    6044 command_runner.go:130] ! I0328 01:07:30.092241       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="poddisruptionbudgets.policy"
	I0328 01:33:36.067177    6044 command_runner.go:130] ! I0328 01:07:30.092436       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="podtemplates"
	I0328 01:33:36.067177    6044 command_runner.go:130] ! I0328 01:07:30.092587       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="controllerrevisions.apps"
	I0328 01:33:36.067236    6044 command_runner.go:130] ! I0328 01:07:30.092720       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="limitranges"
	I0328 01:33:36.067236    6044 command_runner.go:130] ! I0328 01:07:30.092907       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="replicasets.apps"
	I0328 01:33:36.067236    6044 command_runner.go:130] ! I0328 01:07:30.092993       1 controllermanager.go:735] "Started controller" controller="resourcequota-controller"
	I0328 01:33:36.067236    6044 command_runner.go:130] ! I0328 01:07:30.093270       1 resource_quota_controller.go:294] "Starting resource quota controller"
	I0328 01:33:36.067236    6044 command_runner.go:130] ! I0328 01:07:30.095516       1 shared_informer.go:311] Waiting for caches to sync for resource quota
	I0328 01:33:36.067236    6044 command_runner.go:130] ! I0328 01:07:30.095735       1 resource_quota_monitor.go:305] "QuotaMonitor running"
	I0328 01:33:36.067236    6044 command_runner.go:130] ! I0328 01:07:30.117824       1 controllermanager.go:735] "Started controller" controller="legacy-serviceaccount-token-cleaner-controller"
	I0328 01:33:36.067236    6044 command_runner.go:130] ! I0328 01:07:30.117990       1 legacy_serviceaccount_token_cleaner.go:103] "Starting legacy service account token cleaner controller"
	I0328 01:33:36.067346    6044 command_runner.go:130] ! I0328 01:07:30.118005       1 shared_informer.go:311] Waiting for caches to sync for legacy-service-account-token-cleaner
	I0328 01:33:36.067346    6044 command_runner.go:130] ! I0328 01:07:30.139352       1 controllermanager.go:735] "Started controller" controller="disruption-controller"
	I0328 01:33:36.067346    6044 command_runner.go:130] ! I0328 01:07:30.139526       1 disruption.go:433] "Sending events to api server."
	I0328 01:33:36.067346    6044 command_runner.go:130] ! I0328 01:07:30.139561       1 disruption.go:444] "Starting disruption controller"
	I0328 01:33:36.067346    6044 command_runner.go:130] ! I0328 01:07:30.139568       1 shared_informer.go:311] Waiting for caches to sync for disruption
	I0328 01:33:36.067346    6044 command_runner.go:130] ! I0328 01:07:30.158607       1 controllermanager.go:735] "Started controller" controller="certificatesigningrequest-approving-controller"
	I0328 01:33:36.067442    6044 command_runner.go:130] ! I0328 01:07:30.158860       1 certificate_controller.go:115] "Starting certificate controller" name="csrapproving"
	I0328 01:33:36.067442    6044 command_runner.go:130] ! I0328 01:07:30.158912       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrapproving
	I0328 01:33:36.067442    6044 command_runner.go:130] ! I0328 01:07:30.170615       1 controllermanager.go:735] "Started controller" controller="persistentvolume-binder-controller"
	I0328 01:33:36.067442    6044 command_runner.go:130] ! I0328 01:07:30.171245       1 pv_controller_base.go:319] "Starting persistent volume controller"
	I0328 01:33:36.067525    6044 command_runner.go:130] ! I0328 01:07:30.171330       1 shared_informer.go:311] Waiting for caches to sync for persistent volume
	I0328 01:33:36.067525    6044 command_runner.go:130] ! I0328 01:07:30.319254       1 controllermanager.go:735] "Started controller" controller="clusterrole-aggregation-controller"
	I0328 01:33:36.067525    6044 command_runner.go:130] ! I0328 01:07:30.319305       1 clusterroleaggregation_controller.go:189] "Starting ClusterRoleAggregator controller"
	I0328 01:33:36.067606    6044 command_runner.go:130] ! I0328 01:07:30.319687       1 shared_informer.go:311] Waiting for caches to sync for ClusterRoleAggregator
	I0328 01:33:36.067606    6044 command_runner.go:130] ! I0328 01:07:30.471941       1 controllermanager.go:735] "Started controller" controller="ttl-after-finished-controller"
	I0328 01:33:36.067606    6044 command_runner.go:130] ! I0328 01:07:30.472075       1 controllermanager.go:687] "Controller is disabled by a feature gate" controller="validatingadmissionpolicy-status-controller" requiredFeatureGates=["ValidatingAdmissionPolicy"]
	I0328 01:33:36.067606    6044 command_runner.go:130] ! I0328 01:07:30.472153       1 ttlafterfinished_controller.go:109] "Starting TTL after finished controller"
	I0328 01:33:36.067672    6044 command_runner.go:130] ! I0328 01:07:30.472461       1 shared_informer.go:311] Waiting for caches to sync for TTL after finished
	I0328 01:33:36.067695    6044 command_runner.go:130] ! I0328 01:07:30.621249       1 controllermanager.go:735] "Started controller" controller="pod-garbage-collector-controller"
	I0328 01:33:36.067695    6044 command_runner.go:130] ! I0328 01:07:30.621373       1 gc_controller.go:101] "Starting GC controller"
	I0328 01:33:36.067764    6044 command_runner.go:130] ! I0328 01:07:30.621385       1 shared_informer.go:311] Waiting for caches to sync for GC
	I0328 01:33:36.067764    6044 command_runner.go:130] ! I0328 01:07:30.935875       1 controllermanager.go:735] "Started controller" controller="horizontal-pod-autoscaler-controller"
	I0328 01:33:36.067879    6044 command_runner.go:130] ! I0328 01:07:30.935911       1 controllermanager.go:687] "Controller is disabled by a feature gate" controller="service-cidr-controller" requiredFeatureGates=["MultiCIDRServiceAllocator"]
	I0328 01:33:36.067879    6044 command_runner.go:130] ! I0328 01:07:30.935949       1 horizontal.go:200] "Starting HPA controller"
	I0328 01:33:36.067879    6044 command_runner.go:130] ! I0328 01:07:30.935957       1 shared_informer.go:311] Waiting for caches to sync for HPA
	I0328 01:33:36.067949    6044 command_runner.go:130] ! I0328 01:07:31.068710       1 controllermanager.go:735] "Started controller" controller="bootstrap-signer-controller"
	I0328 01:33:36.067974    6044 command_runner.go:130] ! I0328 01:07:31.068846       1 shared_informer.go:311] Waiting for caches to sync for bootstrap_signer
	I0328 01:33:36.067974    6044 command_runner.go:130] ! I0328 01:07:31.220656       1 controllermanager.go:735] "Started controller" controller="root-ca-certificate-publisher-controller"
	I0328 01:33:36.067974    6044 command_runner.go:130] ! I0328 01:07:31.220877       1 publisher.go:102] "Starting root CA cert publisher controller"
	I0328 01:33:36.068028    6044 command_runner.go:130] ! I0328 01:07:31.220890       1 shared_informer.go:311] Waiting for caches to sync for crt configmap
	I0328 01:33:36.068054    6044 command_runner.go:130] ! I0328 01:07:31.379912       1 controllermanager.go:735] "Started controller" controller="endpointslice-mirroring-controller"
	I0328 01:33:36.068054    6044 command_runner.go:130] ! I0328 01:07:31.380187       1 endpointslicemirroring_controller.go:223] "Starting EndpointSliceMirroring controller"
	I0328 01:33:36.068054    6044 command_runner.go:130] ! I0328 01:07:31.380276       1 shared_informer.go:311] Waiting for caches to sync for endpoint_slice_mirroring
	I0328 01:33:36.068105    6044 command_runner.go:130] ! I0328 01:07:31.525433       1 controllermanager.go:735] "Started controller" controller="replicationcontroller-controller"
	I0328 01:33:36.068130    6044 command_runner.go:130] ! I0328 01:07:31.525577       1 replica_set.go:214] "Starting controller" name="replicationcontroller"
	I0328 01:33:36.068130    6044 command_runner.go:130] ! I0328 01:07:31.525588       1 shared_informer.go:311] Waiting for caches to sync for ReplicationController
	I0328 01:33:36.068182    6044 command_runner.go:130] ! I0328 01:07:31.690023       1 controllermanager.go:735] "Started controller" controller="ttl-controller"
	I0328 01:33:36.068182    6044 command_runner.go:130] ! I0328 01:07:31.690130       1 ttl_controller.go:124] "Starting TTL controller"
	I0328 01:33:36.068206    6044 command_runner.go:130] ! I0328 01:07:31.690144       1 shared_informer.go:311] Waiting for caches to sync for TTL
	I0328 01:33:36.068206    6044 command_runner.go:130] ! I0328 01:07:31.828859       1 controllermanager.go:735] "Started controller" controller="token-cleaner-controller"
	I0328 01:33:36.068206    6044 command_runner.go:130] ! I0328 01:07:31.828953       1 tokencleaner.go:112] "Starting token cleaner controller"
	I0328 01:33:36.068206    6044 command_runner.go:130] ! I0328 01:07:31.828963       1 shared_informer.go:311] Waiting for caches to sync for token_cleaner
	I0328 01:33:36.068206    6044 command_runner.go:130] ! I0328 01:07:31.828970       1 shared_informer.go:318] Caches are synced for token_cleaner
	I0328 01:33:36.068206    6044 command_runner.go:130] ! I0328 01:07:31.991678       1 controllermanager.go:735] "Started controller" controller="persistentvolume-attach-detach-controller"
	I0328 01:33:36.068297    6044 command_runner.go:130] ! I0328 01:07:31.994944       1 controllermanager.go:687] "Controller is disabled by a feature gate" controller="storageversion-garbage-collector-controller" requiredFeatureGates=["APIServerIdentity","StorageVersionAPI"]
	I0328 01:33:36.068405    6044 command_runner.go:130] ! I0328 01:07:31.994881       1 attach_detach_controller.go:337] "Starting attach detach controller"
	I0328 01:33:36.068405    6044 command_runner.go:130] ! I0328 01:07:31.995033       1 shared_informer.go:311] Waiting for caches to sync for attach detach
	I0328 01:33:36.068466    6044 command_runner.go:130] ! I0328 01:07:32.040043       1 controllermanager.go:735] "Started controller" controller="taint-eviction-controller"
	I0328 01:33:36.068485    6044 command_runner.go:130] ! I0328 01:07:32.041773       1 taint_eviction.go:285] "Starting" controller="taint-eviction-controller"
	I0328 01:33:36.068485    6044 command_runner.go:130] ! I0328 01:07:32.041876       1 taint_eviction.go:291] "Sending events to api server"
	I0328 01:33:36.068841    6044 command_runner.go:130] ! I0328 01:07:32.041901       1 shared_informer.go:311] Waiting for caches to sync for taint-eviction-controller
	I0328 01:33:36.068918    6044 command_runner.go:130] ! I0328 01:07:32.281623       1 controllermanager.go:735] "Started controller" controller="namespace-controller"
	I0328 01:33:36.069024    6044 command_runner.go:130] ! I0328 01:07:32.281708       1 namespace_controller.go:197] "Starting namespace controller"
	I0328 01:33:36.069024    6044 command_runner.go:130] ! I0328 01:07:32.281718       1 shared_informer.go:311] Waiting for caches to sync for namespace
	I0328 01:33:36.069024    6044 command_runner.go:130] ! I0328 01:07:32.316698       1 certificate_controller.go:115] "Starting certificate controller" name="csrsigning-kubelet-serving"
	I0328 01:33:36.069024    6044 command_runner.go:130] ! I0328 01:07:32.316737       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-kubelet-serving
	I0328 01:33:36.069093    6044 command_runner.go:130] ! I0328 01:07:32.316772       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0328 01:33:36.069093    6044 command_runner.go:130] ! I0328 01:07:32.322120       1 certificate_controller.go:115] "Starting certificate controller" name="csrsigning-kubelet-client"
	I0328 01:33:36.069093    6044 command_runner.go:130] ! I0328 01:07:32.322156       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-kubelet-client
	I0328 01:33:36.069197    6044 command_runner.go:130] ! I0328 01:07:32.322181       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0328 01:33:36.069197    6044 command_runner.go:130] ! I0328 01:07:32.327656       1 certificate_controller.go:115] "Starting certificate controller" name="csrsigning-kube-apiserver-client"
	I0328 01:33:36.069197    6044 command_runner.go:130] ! I0328 01:07:32.327690       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client
	I0328 01:33:36.069264    6044 command_runner.go:130] ! I0328 01:07:32.327721       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0328 01:33:36.069289    6044 command_runner.go:130] ! I0328 01:07:32.331471       1 controllermanager.go:735] "Started controller" controller="certificatesigningrequest-signing-controller"
	I0328 01:33:36.069289    6044 command_runner.go:130] ! I0328 01:07:32.331563       1 certificate_controller.go:115] "Starting certificate controller" name="csrsigning-legacy-unknown"
	I0328 01:33:36.069289    6044 command_runner.go:130] ! I0328 01:07:32.331574       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
	I0328 01:33:36.069342    6044 command_runner.go:130] ! I0328 01:07:32.331616       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0328 01:33:36.069342    6044 command_runner.go:130] ! E0328 01:07:32.365862       1 core.go:270] "Failed to start cloud node lifecycle controller" err="no cloud provider provided"
	I0328 01:33:36.069342    6044 command_runner.go:130] ! I0328 01:07:32.365985       1 controllermanager.go:713] "Warning: skipping controller" controller="cloud-node-lifecycle-controller"
	I0328 01:33:36.069405    6044 command_runner.go:130] ! I0328 01:07:32.366024       1 controllermanager.go:687] "Controller is disabled by a feature gate" controller="resourceclaim-controller" requiredFeatureGates=["DynamicResourceAllocation"]
	I0328 01:33:36.069430    6044 command_runner.go:130] ! I0328 01:07:32.520320       1 controllermanager.go:735] "Started controller" controller="endpointslice-controller"
	I0328 01:33:36.069430    6044 command_runner.go:130] ! I0328 01:07:32.520407       1 endpointslice_controller.go:264] "Starting endpoint slice controller"
	I0328 01:33:36.069430    6044 command_runner.go:130] ! I0328 01:07:32.520419       1 shared_informer.go:311] Waiting for caches to sync for endpoint_slice
	I0328 01:33:36.069481    6044 command_runner.go:130] ! I0328 01:07:32.567130       1 controllermanager.go:735] "Started controller" controller="certificatesigningrequest-cleaner-controller"
	I0328 01:33:36.069505    6044 command_runner.go:130] ! I0328 01:07:32.567208       1 cleaner.go:83] "Starting CSR cleaner controller"
	I0328 01:33:36.069505    6044 command_runner.go:130] ! I0328 01:07:32.719261       1 controllermanager.go:735] "Started controller" controller="statefulset-controller"
	I0328 01:33:36.069557    6044 command_runner.go:130] ! I0328 01:07:32.719392       1 stateful_set.go:161] "Starting stateful set controller"
	I0328 01:33:36.069582    6044 command_runner.go:130] ! I0328 01:07:32.719403       1 shared_informer.go:311] Waiting for caches to sync for stateful set
	I0328 01:33:36.069582    6044 command_runner.go:130] ! I0328 01:07:32.872730       1 controllermanager.go:735] "Started controller" controller="persistentvolumeclaim-protection-controller"
	I0328 01:33:36.069582    6044 command_runner.go:130] ! I0328 01:07:32.872869       1 pvc_protection_controller.go:102] "Starting PVC protection controller"
	I0328 01:33:36.069644    6044 command_runner.go:130] ! I0328 01:07:32.873455       1 shared_informer.go:311] Waiting for caches to sync for PVC protection
	I0328 01:33:36.069644    6044 command_runner.go:130] ! I0328 01:07:33.116208       1 garbagecollector.go:155] "Starting controller" controller="garbagecollector"
	I0328 01:33:36.069666    6044 command_runner.go:130] ! I0328 01:07:33.116233       1 shared_informer.go:311] Waiting for caches to sync for garbage collector
	I0328 01:33:36.069721    6044 command_runner.go:130] ! I0328 01:07:33.116257       1 graph_builder.go:302] "Running" component="GraphBuilder"
	I0328 01:33:36.069721    6044 command_runner.go:130] ! I0328 01:07:33.116280       1 controllermanager.go:735] "Started controller" controller="garbage-collector-controller"
	I0328 01:33:36.069721    6044 command_runner.go:130] ! I0328 01:07:33.370650       1 controllermanager.go:735] "Started controller" controller="deployment-controller"
	I0328 01:33:36.069721    6044 command_runner.go:130] ! I0328 01:07:33.370836       1 deployment_controller.go:168] "Starting controller" controller="deployment"
	I0328 01:33:36.069721    6044 command_runner.go:130] ! I0328 01:07:33.370851       1 shared_informer.go:311] Waiting for caches to sync for deployment
	I0328 01:33:36.069721    6044 command_runner.go:130] ! E0328 01:07:33.529036       1 core.go:105] "Failed to start service controller" err="WARNING: no cloud provider provided, services of type LoadBalancer will fail"
	I0328 01:33:36.069721    6044 command_runner.go:130] ! I0328 01:07:33.529209       1 controllermanager.go:713] "Warning: skipping controller" controller="service-lb-controller"
	I0328 01:33:36.069721    6044 command_runner.go:130] ! I0328 01:07:33.674381       1 controllermanager.go:735] "Started controller" controller="replicaset-controller"
	I0328 01:33:36.069721    6044 command_runner.go:130] ! I0328 01:07:33.674638       1 replica_set.go:214] "Starting controller" name="replicaset"
	I0328 01:33:36.069721    6044 command_runner.go:130] ! I0328 01:07:33.674700       1 shared_informer.go:311] Waiting for caches to sync for ReplicaSet
	I0328 01:33:36.069721    6044 command_runner.go:130] ! I0328 01:07:43.727895       1 range_allocator.go:111] "No Secondary Service CIDR provided. Skipping filtering out secondary service addresses"
	I0328 01:33:36.069721    6044 command_runner.go:130] ! I0328 01:07:43.728282       1 controllermanager.go:735] "Started controller" controller="node-ipam-controller"
	I0328 01:33:36.069721    6044 command_runner.go:130] ! I0328 01:07:43.728736       1 node_ipam_controller.go:160] "Starting ipam controller"
	I0328 01:33:36.069721    6044 command_runner.go:130] ! I0328 01:07:43.728751       1 shared_informer.go:311] Waiting for caches to sync for node
	I0328 01:33:36.069721    6044 command_runner.go:130] ! I0328 01:07:43.743975       1 controllermanager.go:735] "Started controller" controller="persistentvolume-expander-controller"
	I0328 01:33:36.069721    6044 command_runner.go:130] ! I0328 01:07:43.744248       1 expand_controller.go:328] "Starting expand controller"
	I0328 01:33:36.069721    6044 command_runner.go:130] ! I0328 01:07:43.744261       1 shared_informer.go:311] Waiting for caches to sync for expand
	I0328 01:33:36.069721    6044 command_runner.go:130] ! I0328 01:07:43.764054       1 controllermanager.go:735] "Started controller" controller="ephemeral-volume-controller"
	I0328 01:33:36.069721    6044 command_runner.go:130] ! I0328 01:07:43.765369       1 controller.go:169] "Starting ephemeral volume controller"
	I0328 01:33:36.069721    6044 command_runner.go:130] ! I0328 01:07:43.765400       1 shared_informer.go:311] Waiting for caches to sync for ephemeral
	I0328 01:33:36.069721    6044 command_runner.go:130] ! I0328 01:07:43.801140       1 controllermanager.go:735] "Started controller" controller="endpoints-controller"
	I0328 01:33:36.069721    6044 command_runner.go:130] ! I0328 01:07:43.801602       1 endpoints_controller.go:174] "Starting endpoint controller"
	I0328 01:33:36.069721    6044 command_runner.go:130] ! I0328 01:07:43.801743       1 shared_informer.go:311] Waiting for caches to sync for endpoint
	I0328 01:33:36.069721    6044 command_runner.go:130] ! I0328 01:07:43.818031       1 controllermanager.go:735] "Started controller" controller="daemonset-controller"
	I0328 01:33:36.069721    6044 command_runner.go:130] ! I0328 01:07:43.818707       1 daemon_controller.go:291] "Starting daemon sets controller"
	I0328 01:33:36.069721    6044 command_runner.go:130] ! I0328 01:07:43.820733       1 shared_informer.go:311] Waiting for caches to sync for daemon sets
	I0328 01:33:36.069721    6044 command_runner.go:130] ! I0328 01:07:43.839571       1 shared_informer.go:311] Waiting for caches to sync for resource quota
	I0328 01:33:36.069721    6044 command_runner.go:130] ! I0328 01:07:43.887668       1 shared_informer.go:318] Caches are synced for TTL after finished
	I0328 01:33:36.069721    6044 command_runner.go:130] ! I0328 01:07:43.905965       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-240000\" does not exist"
	I0328 01:33:36.069721    6044 command_runner.go:130] ! I0328 01:07:43.917970       1 shared_informer.go:318] Caches are synced for cronjob
	I0328 01:33:36.069721    6044 command_runner.go:130] ! I0328 01:07:43.918581       1 shared_informer.go:318] Caches are synced for legacy-service-account-token-cleaner
	I0328 01:33:36.069721    6044 command_runner.go:130] ! I0328 01:07:43.921260       1 shared_informer.go:318] Caches are synced for endpoint_slice
	I0328 01:33:36.069721    6044 command_runner.go:130] ! I0328 01:07:43.921573       1 shared_informer.go:318] Caches are synced for GC
	I0328 01:33:36.069721    6044 command_runner.go:130] ! I0328 01:07:43.921763       1 shared_informer.go:318] Caches are synced for stateful set
	I0328 01:33:36.070255    6044 command_runner.go:130] ! I0328 01:07:43.923599       1 shared_informer.go:318] Caches are synced for ClusterRoleAggregator
	I0328 01:33:36.070255    6044 command_runner.go:130] ! I0328 01:07:43.924267       1 shared_informer.go:311] Waiting for caches to sync for garbage collector
	I0328 01:33:36.070299    6044 command_runner.go:130] ! I0328 01:07:43.922298       1 shared_informer.go:318] Caches are synced for daemon sets
	I0328 01:33:36.070349    6044 command_runner.go:130] ! I0328 01:07:43.928013       1 shared_informer.go:318] Caches are synced for ReplicationController
	I0328 01:33:36.070349    6044 command_runner.go:130] ! I0328 01:07:43.928774       1 shared_informer.go:318] Caches are synced for node
	I0328 01:33:36.070409    6044 command_runner.go:130] ! I0328 01:07:43.932324       1 range_allocator.go:174] "Sending events to api server"
	I0328 01:33:36.070409    6044 command_runner.go:130] ! I0328 01:07:43.932665       1 range_allocator.go:178] "Starting range CIDR allocator"
	I0328 01:33:36.070443    6044 command_runner.go:130] ! I0328 01:07:43.932965       1 shared_informer.go:311] Waiting for caches to sync for cidrallocator
	I0328 01:33:36.070443    6044 command_runner.go:130] ! I0328 01:07:43.933302       1 shared_informer.go:318] Caches are synced for cidrallocator
	I0328 01:33:36.070482    6044 command_runner.go:130] ! I0328 01:07:43.922308       1 shared_informer.go:318] Caches are synced for crt configmap
	I0328 01:33:36.070482    6044 command_runner.go:130] ! I0328 01:07:43.936175       1 shared_informer.go:318] Caches are synced for HPA
	I0328 01:33:36.070482    6044 command_runner.go:130] ! I0328 01:07:43.933370       1 shared_informer.go:318] Caches are synced for taint
	I0328 01:33:36.070482    6044 command_runner.go:130] ! I0328 01:07:43.936479       1 node_lifecycle_controller.go:1222] "Initializing eviction metric for zone" zone=""
	I0328 01:33:36.070535    6044 command_runner.go:130] ! I0328 01:07:43.936564       1 node_lifecycle_controller.go:874] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-240000"
	I0328 01:33:36.070535    6044 command_runner.go:130] ! I0328 01:07:43.936602       1 node_lifecycle_controller.go:1026] "Controller detected that all Nodes are not-Ready. Entering master disruption mode"
	I0328 01:33:36.070566    6044 command_runner.go:130] ! I0328 01:07:43.937774       1 event.go:376] "Event occurred" object="multinode-240000" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-240000 event: Registered Node multinode-240000 in Controller"
	I0328 01:33:36.070599    6044 command_runner.go:130] ! I0328 01:07:43.945317       1 shared_informer.go:318] Caches are synced for taint-eviction-controller
	I0328 01:33:36.070650    6044 command_runner.go:130] ! I0328 01:07:43.945634       1 shared_informer.go:318] Caches are synced for expand
	I0328 01:33:36.070650    6044 command_runner.go:130] ! I0328 01:07:43.953475       1 shared_informer.go:318] Caches are synced for PV protection
	I0328 01:33:36.070731    6044 command_runner.go:130] ! I0328 01:07:43.955430       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-240000" podCIDRs=["10.244.0.0/24"]
	I0328 01:33:36.070731    6044 command_runner.go:130] ! I0328 01:07:43.967780       1 shared_informer.go:318] Caches are synced for ephemeral
	I0328 01:33:36.070731    6044 command_runner.go:130] ! I0328 01:07:43.970146       1 shared_informer.go:318] Caches are synced for bootstrap_signer
	I0328 01:33:36.070793    6044 command_runner.go:130] ! I0328 01:07:43.973346       1 shared_informer.go:318] Caches are synced for persistent volume
	I0328 01:33:36.070793    6044 command_runner.go:130] ! I0328 01:07:43.973608       1 shared_informer.go:318] Caches are synced for PVC protection
	I0328 01:33:36.070793    6044 command_runner.go:130] ! I0328 01:07:43.981178       1 shared_informer.go:318] Caches are synced for endpoint_slice_mirroring
	I0328 01:33:36.070835    6044 command_runner.go:130] ! I0328 01:07:43.981918       1 event.go:376] "Event occurred" object="kube-system/kube-scheduler-multinode-240000" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0328 01:33:36.070926    6044 command_runner.go:130] ! I0328 01:07:43.981953       1 event.go:376] "Event occurred" object="kube-system/kube-apiserver-multinode-240000" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0328 01:33:36.070926    6044 command_runner.go:130] ! I0328 01:07:43.981962       1 event.go:376] "Event occurred" object="kube-system/etcd-multinode-240000" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0328 01:33:36.070997    6044 command_runner.go:130] ! I0328 01:07:43.982017       1 shared_informer.go:318] Caches are synced for namespace
	I0328 01:33:36.070997    6044 command_runner.go:130] ! I0328 01:07:43.982124       1 shared_informer.go:318] Caches are synced for service account
	I0328 01:33:36.071032    6044 command_runner.go:130] ! I0328 01:07:43.983577       1 event.go:376] "Event occurred" object="kube-system/kube-controller-manager-multinode-240000" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0328 01:33:36.071087    6044 command_runner.go:130] ! I0328 01:07:43.992236       1 shared_informer.go:318] Caches are synced for job
	I0328 01:33:36.071087    6044 command_runner.go:130] ! I0328 01:07:43.992438       1 shared_informer.go:318] Caches are synced for TTL
	I0328 01:33:36.071142    6044 command_runner.go:130] ! I0328 01:07:43.995152       1 shared_informer.go:318] Caches are synced for attach detach
	I0328 01:33:36.071142    6044 command_runner.go:130] ! I0328 01:07:44.003250       1 shared_informer.go:318] Caches are synced for endpoint
	I0328 01:33:36.071176    6044 command_runner.go:130] ! I0328 01:07:44.023343       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-client
	I0328 01:33:36.071176    6044 command_runner.go:130] ! I0328 01:07:44.023546       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-serving
	I0328 01:33:36.071176    6044 command_runner.go:130] ! I0328 01:07:44.030529       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0328 01:33:36.071228    6044 command_runner.go:130] ! I0328 01:07:44.032370       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-legacy-unknown
	I0328 01:33:36.071228    6044 command_runner.go:130] ! I0328 01:07:44.039826       1 shared_informer.go:318] Caches are synced for resource quota
	I0328 01:33:36.071269    6044 command_runner.go:130] ! I0328 01:07:44.039875       1 shared_informer.go:318] Caches are synced for disruption
	I0328 01:33:36.071320    6044 command_runner.go:130] ! I0328 01:07:44.059155       1 shared_informer.go:318] Caches are synced for certificate-csrapproving
	I0328 01:33:36.071361    6044 command_runner.go:130] ! I0328 01:07:44.071020       1 shared_informer.go:318] Caches are synced for deployment
	I0328 01:33:36.071405    6044 command_runner.go:130] ! I0328 01:07:44.074821       1 shared_informer.go:318] Caches are synced for ReplicaSet
	I0328 01:33:36.071405    6044 command_runner.go:130] ! I0328 01:07:44.095916       1 shared_informer.go:318] Caches are synced for resource quota
	I0328 01:33:36.071437    6044 command_runner.go:130] ! I0328 01:07:44.097596       1 event.go:376] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-rwghf"
	I0328 01:33:36.071437    6044 command_runner.go:130] ! I0328 01:07:44.101053       1 event.go:376] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-47rqg"
	I0328 01:33:36.071437    6044 command_runner.go:130] ! I0328 01:07:44.321636       1 event.go:376] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-76f75df574 to 2"
	I0328 01:33:36.071437    6044 command_runner.go:130] ! I0328 01:07:44.505533       1 event.go:376] "Event occurred" object="kube-system/coredns-76f75df574" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-76f75df574-fgw8j"
	I0328 01:33:36.071437    6044 command_runner.go:130] ! I0328 01:07:44.516581       1 shared_informer.go:318] Caches are synced for garbage collector
	I0328 01:33:36.071437    6044 command_runner.go:130] ! I0328 01:07:44.516605       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I0328 01:33:36.071437    6044 command_runner.go:130] ! I0328 01:07:44.526884       1 shared_informer.go:318] Caches are synced for garbage collector
	I0328 01:33:36.071437    6044 command_runner.go:130] ! I0328 01:07:44.626020       1 event.go:376] "Event occurred" object="kube-system/coredns-76f75df574" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-76f75df574-776ph"
	I0328 01:33:36.071437    6044 command_runner.go:130] ! I0328 01:07:44.696026       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="375.988233ms"
	I0328 01:33:36.071437    6044 command_runner.go:130] ! I0328 01:07:44.735389       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="39.221627ms"
	I0328 01:33:36.071437    6044 command_runner.go:130] ! I0328 01:07:44.735856       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="390.399µs"
	I0328 01:33:36.071437    6044 command_runner.go:130] ! I0328 01:07:45.456688       1 event.go:376] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-76f75df574 to 1 from 2"
	I0328 01:33:36.071437    6044 command_runner.go:130] ! I0328 01:07:45.536906       1 event.go:376] "Event occurred" object="kube-system/coredns-76f75df574" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-76f75df574-fgw8j"
	I0328 01:33:36.071437    6044 command_runner.go:130] ! I0328 01:07:45.583335       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="126.427189ms"
	I0328 01:33:36.071437    6044 command_runner.go:130] ! I0328 01:07:45.637187       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="53.741283ms"
	I0328 01:33:36.071437    6044 command_runner.go:130] ! I0328 01:07:45.710380       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="73.035205ms"
	I0328 01:33:36.071437    6044 command_runner.go:130] ! I0328 01:07:45.710568       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="73.7µs"
	I0328 01:33:36.071437    6044 command_runner.go:130] ! I0328 01:07:57.839298       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="81.8µs"
	I0328 01:33:36.071437    6044 command_runner.go:130] ! I0328 01:07:57.891332       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="135.3µs"
	I0328 01:33:36.071437    6044 command_runner.go:130] ! I0328 01:07:58.938669       1 node_lifecycle_controller.go:1045] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I0328 01:33:36.071437    6044 command_runner.go:130] ! I0328 01:07:59.949779       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="25.944009ms"
	I0328 01:33:36.071437    6044 command_runner.go:130] ! I0328 01:07:59.950218       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="327.807µs"
	I0328 01:33:36.071437    6044 command_runner.go:130] ! I0328 01:10:54.764176       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-240000-m02\" does not exist"
	I0328 01:33:36.071437    6044 command_runner.go:130] ! I0328 01:10:54.803820       1 event.go:376] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-hsnfl"
	I0328 01:33:36.071978    6044 command_runner.go:130] ! I0328 01:10:54.803944       1 event.go:376] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-t88gz"
	I0328 01:33:36.072023    6044 command_runner.go:130] ! I0328 01:10:54.804885       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-240000-m02" podCIDRs=["10.244.1.0/24"]
	I0328 01:33:36.072023    6044 command_runner.go:130] ! I0328 01:10:58.975442       1 node_lifecycle_controller.go:874] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-240000-m02"
	I0328 01:33:36.072023    6044 command_runner.go:130] ! I0328 01:10:58.975715       1 event.go:376] "Event occurred" object="multinode-240000-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-240000-m02 event: Registered Node multinode-240000-m02 in Controller"
	I0328 01:33:36.072085    6044 command_runner.go:130] ! I0328 01:11:17.665064       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-240000-m02"
	I0328 01:33:36.072119    6044 command_runner.go:130] ! I0328 01:11:46.242165       1 event.go:376] "Event occurred" object="default/busybox" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set busybox-7fdf7869d9 to 2"
	I0328 01:33:36.072161    6044 command_runner.go:130] ! I0328 01:11:46.265582       1 event.go:376] "Event occurred" object="default/busybox-7fdf7869d9" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-7fdf7869d9-zgwm4"
	I0328 01:33:36.072161    6044 command_runner.go:130] ! I0328 01:11:46.287052       1 event.go:376] "Event occurred" object="default/busybox-7fdf7869d9" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-7fdf7869d9-ct428"
	I0328 01:33:36.072200    6044 command_runner.go:130] ! I0328 01:11:46.306059       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="64.440988ms"
	I0328 01:33:36.072200    6044 command_runner.go:130] ! I0328 01:11:46.352353       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="46.180707ms"
	I0328 01:33:36.072200    6044 command_runner.go:130] ! I0328 01:11:46.354927       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="106.701µs"
	I0328 01:33:36.072252    6044 command_runner.go:130] ! I0328 01:11:46.380446       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="75.4µs"
	I0328 01:33:36.072391    6044 command_runner.go:130] ! I0328 01:11:49.177937       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="20.338671ms"
	I0328 01:33:36.072391    6044 command_runner.go:130] ! I0328 01:11:49.178143       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="95.8µs"
	I0328 01:33:36.072442    6044 command_runner.go:130] ! I0328 01:11:49.352601       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="13.382248ms"
	I0328 01:33:36.072442    6044 command_runner.go:130] ! I0328 01:11:49.353052       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="62.5µs"
	I0328 01:33:36.072475    6044 command_runner.go:130] ! I0328 01:15:58.358805       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-240000-m03\" does not exist"
	I0328 01:33:36.072475    6044 command_runner.go:130] ! I0328 01:15:58.359348       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-240000-m02"
	I0328 01:33:36.072475    6044 command_runner.go:130] ! I0328 01:15:58.402286       1 event.go:376] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-jvgx2"
	I0328 01:33:36.072475    6044 command_runner.go:130] ! I0328 01:15:58.402827       1 event.go:376] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-55rch"
	I0328 01:33:36.072475    6044 command_runner.go:130] ! I0328 01:15:58.405421       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-240000-m03" podCIDRs=["10.244.2.0/24"]
	I0328 01:33:36.072475    6044 command_runner.go:130] ! I0328 01:15:59.058703       1 node_lifecycle_controller.go:874] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-240000-m03"
	I0328 01:33:36.072475    6044 command_runner.go:130] ! I0328 01:15:59.059180       1 event.go:376] "Event occurred" object="multinode-240000-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-240000-m03 event: Registered Node multinode-240000-m03 in Controller"
	I0328 01:33:36.072475    6044 command_runner.go:130] ! I0328 01:16:20.751668       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-240000-m02"
	I0328 01:33:36.072475    6044 command_runner.go:130] ! I0328 01:24:29.197407       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-240000-m02"
	I0328 01:33:36.072475    6044 command_runner.go:130] ! I0328 01:24:29.203202       1 event.go:376] "Event occurred" object="multinode-240000-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node multinode-240000-m03 status is now: NodeNotReady"
	I0328 01:33:36.072475    6044 command_runner.go:130] ! I0328 01:24:29.229608       1 event.go:376] "Event occurred" object="kube-system/kube-proxy-55rch" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0328 01:33:36.073013    6044 command_runner.go:130] ! I0328 01:24:29.247522       1 event.go:376] "Event occurred" object="kube-system/kindnet-jvgx2" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0328 01:33:36.073092    6044 command_runner.go:130] ! I0328 01:27:23.686830       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-240000-m02"
	I0328 01:33:36.073092    6044 command_runner.go:130] ! I0328 01:27:24.286010       1 event.go:376] "Event occurred" object="multinode-240000-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RemovingNode" message="Node multinode-240000-m03 event: Removing Node multinode-240000-m03 from Controller"
	I0328 01:33:36.073092    6044 command_runner.go:130] ! I0328 01:27:30.358404       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-240000-m02"
	I0328 01:33:36.073092    6044 command_runner.go:130] ! I0328 01:27:30.361770       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-240000-m03\" does not exist"
	I0328 01:33:36.073190    6044 command_runner.go:130] ! I0328 01:27:30.394360       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-240000-m03" podCIDRs=["10.244.3.0/24"]
	I0328 01:33:36.073190    6044 command_runner.go:130] ! I0328 01:27:34.288477       1 event.go:376] "Event occurred" object="multinode-240000-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-240000-m03 event: Registered Node multinode-240000-m03 in Controller"
	I0328 01:33:36.073224    6044 command_runner.go:130] ! I0328 01:27:36.134336       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-240000-m03"
	I0328 01:33:36.073224    6044 command_runner.go:130] ! I0328 01:29:14.344304       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-240000-m02"
	I0328 01:33:36.073224    6044 command_runner.go:130] ! I0328 01:29:14.346290       1 event.go:376] "Event occurred" object="multinode-240000-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node multinode-240000-m03 status is now: NodeNotReady"
	I0328 01:33:36.073224    6044 command_runner.go:130] ! I0328 01:29:14.370766       1 event.go:376] "Event occurred" object="kube-system/kube-proxy-55rch" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0328 01:33:36.073224    6044 command_runner.go:130] ! I0328 01:29:14.392308       1 event.go:376] "Event occurred" object="kube-system/kindnet-jvgx2" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0328 01:33:36.094272    6044 logs.go:123] Gathering logs for dmesg ...
	I0328 01:33:36.094272    6044 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:33:36.122078    6044 command_runner.go:130] > [Mar28 01:30] You have booted with nomodeset. This means your GPU drivers are DISABLED
	I0328 01:33:36.122278    6044 command_runner.go:130] > [  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	I0328 01:33:36.122344    6044 command_runner.go:130] > [  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	I0328 01:33:36.122344    6044 command_runner.go:130] > [  +0.141916] RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
	I0328 01:33:36.122450    6044 command_runner.go:130] > [  +0.024106] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	I0328 01:33:36.122450    6044 command_runner.go:130] > [  +0.000005] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	I0328 01:33:36.122450    6044 command_runner.go:130] > [  +0.000001] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	I0328 01:33:36.122618    6044 command_runner.go:130] > [  +0.068008] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	I0328 01:33:36.122618    6044 command_runner.go:130] > [  +0.027431] * Found PM-Timer Bug on the chipset. Due to workarounds for a bug,
	I0328 01:33:36.122618    6044 command_runner.go:130] >               * this clock source is slow. Consider trying other clock sources
	I0328 01:33:36.122741    6044 command_runner.go:130] > [  +5.946328] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	I0328 01:33:36.122741    6044 command_runner.go:130] > [  +0.758535] psmouse serio1: trackpoint: failed to get extended button data, assuming 3 buttons
	I0328 01:33:36.122741    6044 command_runner.go:130] > [  +1.937420] systemd-fstab-generator[115]: Ignoring "noauto" option for root device
	I0328 01:33:36.122741    6044 command_runner.go:130] > [  +7.347197] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	I0328 01:33:36.122940    6044 command_runner.go:130] > [  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	I0328 01:33:36.122940    6044 command_runner.go:130] > [  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	I0328 01:33:36.122940    6044 command_runner.go:130] > [Mar28 01:31] systemd-fstab-generator[640]: Ignoring "noauto" option for root device
	I0328 01:33:36.123124    6044 command_runner.go:130] > [  +0.201840] systemd-fstab-generator[653]: Ignoring "noauto" option for root device
	I0328 01:33:36.123172    6044 command_runner.go:130] > [Mar28 01:32] systemd-fstab-generator[979]: Ignoring "noauto" option for root device
	I0328 01:33:36.123276    6044 command_runner.go:130] > [  +0.108343] kauditd_printk_skb: 73 callbacks suppressed
	I0328 01:33:36.123332    6044 command_runner.go:130] > [  +0.586323] systemd-fstab-generator[1017]: Ignoring "noauto" option for root device
	I0328 01:33:36.123362    6044 command_runner.go:130] > [  +0.218407] systemd-fstab-generator[1029]: Ignoring "noauto" option for root device
	I0328 01:33:36.123362    6044 command_runner.go:130] > [  +0.238441] systemd-fstab-generator[1043]: Ignoring "noauto" option for root device
	I0328 01:33:36.123362    6044 command_runner.go:130] > [  +3.002162] systemd-fstab-generator[1230]: Ignoring "noauto" option for root device
	I0328 01:33:36.123362    6044 command_runner.go:130] > [  +0.206082] systemd-fstab-generator[1242]: Ignoring "noauto" option for root device
	I0328 01:33:36.123362    6044 command_runner.go:130] > [  +0.206423] systemd-fstab-generator[1254]: Ignoring "noauto" option for root device
	I0328 01:33:36.123362    6044 command_runner.go:130] > [  +0.316656] systemd-fstab-generator[1269]: Ignoring "noauto" option for root device
	I0328 01:33:36.123504    6044 command_runner.go:130] > [  +0.941398] systemd-fstab-generator[1391]: Ignoring "noauto" option for root device
	I0328 01:33:36.123625    6044 command_runner.go:130] > [  +0.123620] kauditd_printk_skb: 205 callbacks suppressed
	I0328 01:33:36.123670    6044 command_runner.go:130] > [  +3.687763] systemd-fstab-generator[1526]: Ignoring "noauto" option for root device
	I0328 01:33:36.123670    6044 command_runner.go:130] > [  +1.367953] kauditd_printk_skb: 44 callbacks suppressed
	I0328 01:33:36.123670    6044 command_runner.go:130] > [  +6.014600] kauditd_printk_skb: 30 callbacks suppressed
	I0328 01:33:36.123767    6044 command_runner.go:130] > [  +4.465273] systemd-fstab-generator[3066]: Ignoring "noauto" option for root device
	I0328 01:33:36.123767    6044 command_runner.go:130] > [  +7.649293] kauditd_printk_skb: 70 callbacks suppressed
	I0328 01:33:36.126127    6044 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:33:36.126127    6044 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0328 01:33:36.372929    6044 command_runner.go:130] > Name:               multinode-240000
	I0328 01:33:36.372929    6044 command_runner.go:130] > Roles:              control-plane
	I0328 01:33:36.372929    6044 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0328 01:33:36.373937    6044 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0328 01:33:36.373937    6044 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0328 01:33:36.373937    6044 command_runner.go:130] >                     kubernetes.io/hostname=multinode-240000
	I0328 01:33:36.373937    6044 command_runner.go:130] >                     kubernetes.io/os=linux
	I0328 01:33:36.374024    6044 command_runner.go:130] >                     minikube.k8s.io/commit=1a940f980f77c9bfc8ef93678db8c1f49c9dd79d
	I0328 01:33:36.374081    6044 command_runner.go:130] >                     minikube.k8s.io/name=multinode-240000
	I0328 01:33:36.374081    6044 command_runner.go:130] >                     minikube.k8s.io/primary=true
	I0328 01:33:36.374116    6044 command_runner.go:130] >                     minikube.k8s.io/updated_at=2024_03_28T01_07_32_0700
	I0328 01:33:36.374116    6044 command_runner.go:130] >                     minikube.k8s.io/version=v1.33.0-beta.0
	I0328 01:33:36.374158    6044 command_runner.go:130] >                     node-role.kubernetes.io/control-plane=
	I0328 01:33:36.374158    6044 command_runner.go:130] >                     node.kubernetes.io/exclude-from-external-load-balancers=
	I0328 01:33:36.374200    6044 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0328 01:33:36.374200    6044 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0328 01:33:36.374200    6044 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0328 01:33:36.374252    6044 command_runner.go:130] > CreationTimestamp:  Thu, 28 Mar 2024 01:07:27 +0000
	I0328 01:33:36.374252    6044 command_runner.go:130] > Taints:             <none>
	I0328 01:33:36.374252    6044 command_runner.go:130] > Unschedulable:      false
	I0328 01:33:36.374252    6044 command_runner.go:130] > Lease:
	I0328 01:33:36.374252    6044 command_runner.go:130] >   HolderIdentity:  multinode-240000
	I0328 01:33:36.374252    6044 command_runner.go:130] >   AcquireTime:     <unset>
	I0328 01:33:36.374252    6044 command_runner.go:130] >   RenewTime:       Thu, 28 Mar 2024 01:33:30 +0000
	I0328 01:33:36.374327    6044 command_runner.go:130] > Conditions:
	I0328 01:33:36.374327    6044 command_runner.go:130] >   Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	I0328 01:33:36.374354    6044 command_runner.go:130] >   ----             ------  -----------------                 ------------------                ------                       -------
	I0328 01:33:36.374383    6044 command_runner.go:130] >   MemoryPressure   False   Thu, 28 Mar 2024 01:32:53 +0000   Thu, 28 Mar 2024 01:07:24 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	I0328 01:33:36.374383    6044 command_runner.go:130] >   DiskPressure     False   Thu, 28 Mar 2024 01:32:53 +0000   Thu, 28 Mar 2024 01:07:24 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	I0328 01:33:36.374383    6044 command_runner.go:130] >   PIDPressure      False   Thu, 28 Mar 2024 01:32:53 +0000   Thu, 28 Mar 2024 01:07:24 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	I0328 01:33:36.374383    6044 command_runner.go:130] >   Ready            True    Thu, 28 Mar 2024 01:32:53 +0000   Thu, 28 Mar 2024 01:32:53 +0000   KubeletReady                 kubelet is posting ready status
	I0328 01:33:36.374383    6044 command_runner.go:130] > Addresses:
	I0328 01:33:36.374383    6044 command_runner.go:130] >   InternalIP:  172.28.229.19
	I0328 01:33:36.374383    6044 command_runner.go:130] >   Hostname:    multinode-240000
	I0328 01:33:36.374383    6044 command_runner.go:130] > Capacity:
	I0328 01:33:36.374383    6044 command_runner.go:130] >   cpu:                2
	I0328 01:33:36.374383    6044 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0328 01:33:36.374383    6044 command_runner.go:130] >   hugepages-2Mi:      0
	I0328 01:33:36.374383    6044 command_runner.go:130] >   memory:             2164264Ki
	I0328 01:33:36.374383    6044 command_runner.go:130] >   pods:               110
	I0328 01:33:36.374383    6044 command_runner.go:130] > Allocatable:
	I0328 01:33:36.374383    6044 command_runner.go:130] >   cpu:                2
	I0328 01:33:36.374383    6044 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0328 01:33:36.374383    6044 command_runner.go:130] >   hugepages-2Mi:      0
	I0328 01:33:36.374383    6044 command_runner.go:130] >   memory:             2164264Ki
	I0328 01:33:36.374383    6044 command_runner.go:130] >   pods:               110
	I0328 01:33:36.374383    6044 command_runner.go:130] > System Info:
	I0328 01:33:36.374383    6044 command_runner.go:130] >   Machine ID:                 fe98ff783f164d50926235b1a1a0c9a9
	I0328 01:33:36.374383    6044 command_runner.go:130] >   System UUID:                074b49af-5c50-b749-b0a9-2a3d75bf97a0
	I0328 01:33:36.374383    6044 command_runner.go:130] >   Boot ID:                    88b5f16c-258a-4fb6-a998-e0ffa63edff9
	I0328 01:33:36.374383    6044 command_runner.go:130] >   Kernel Version:             5.10.207
	I0328 01:33:36.374383    6044 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0328 01:33:36.374383    6044 command_runner.go:130] >   Operating System:           linux
	I0328 01:33:36.374383    6044 command_runner.go:130] >   Architecture:               amd64
	I0328 01:33:36.374383    6044 command_runner.go:130] >   Container Runtime Version:  docker://26.0.0
	I0328 01:33:36.374383    6044 command_runner.go:130] >   Kubelet Version:            v1.29.3
	I0328 01:33:36.374383    6044 command_runner.go:130] >   Kube-Proxy Version:         v1.29.3
	I0328 01:33:36.374383    6044 command_runner.go:130] > PodCIDR:                      10.244.0.0/24
	I0328 01:33:36.374383    6044 command_runner.go:130] > PodCIDRs:                     10.244.0.0/24
	I0328 01:33:36.374383    6044 command_runner.go:130] > Non-terminated Pods:          (9 in total)
	I0328 01:33:36.374383    6044 command_runner.go:130] >   Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0328 01:33:36.374383    6044 command_runner.go:130] >   ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	I0328 01:33:36.374383    6044 command_runner.go:130] >   default                     busybox-7fdf7869d9-ct428                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	I0328 01:33:36.374383    6044 command_runner.go:130] >   kube-system                 coredns-76f75df574-776ph                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     25m
	I0328 01:33:36.374383    6044 command_runner.go:130] >   kube-system                 etcd-multinode-240000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         77s
	I0328 01:33:36.374383    6044 command_runner.go:130] >   kube-system                 kindnet-rwghf                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      25m
	I0328 01:33:36.374383    6044 command_runner.go:130] >   kube-system                 kube-apiserver-multinode-240000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         77s
	I0328 01:33:36.374383    6044 command_runner.go:130] >   kube-system                 kube-controller-manager-multinode-240000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         26m
	I0328 01:33:36.374383    6044 command_runner.go:130] >   kube-system                 kube-proxy-47rqg                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         25m
	I0328 01:33:36.374383    6044 command_runner.go:130] >   kube-system                 kube-scheduler-multinode-240000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         26m
	I0328 01:33:36.374383    6044 command_runner.go:130] >   kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         25m
	I0328 01:33:36.374383    6044 command_runner.go:130] > Allocated resources:
	I0328 01:33:36.374383    6044 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0328 01:33:36.374383    6044 command_runner.go:130] >   Resource           Requests     Limits
	I0328 01:33:36.374907    6044 command_runner.go:130] >   --------           --------     ------
	I0328 01:33:36.374907    6044 command_runner.go:130] >   cpu                850m (42%)   100m (5%)
	I0328 01:33:36.374907    6044 command_runner.go:130] >   memory             220Mi (10%)  220Mi (10%)
	I0328 01:33:36.374907    6044 command_runner.go:130] >   ephemeral-storage  0 (0%)       0 (0%)
	I0328 01:33:36.374907    6044 command_runner.go:130] >   hugepages-2Mi      0 (0%)       0 (0%)
	I0328 01:33:36.374907    6044 command_runner.go:130] > Events:
	I0328 01:33:36.374987    6044 command_runner.go:130] >   Type    Reason                   Age                From             Message
	I0328 01:33:36.374987    6044 command_runner.go:130] >   ----    ------                   ----               ----             -------
	I0328 01:33:36.374987    6044 command_runner.go:130] >   Normal  Starting                 25m                kube-proxy       
	I0328 01:33:36.374987    6044 command_runner.go:130] >   Normal  Starting                 73s                kube-proxy       
	I0328 01:33:36.374987    6044 command_runner.go:130] >   Normal  Starting                 26m                kubelet          Starting kubelet.
	I0328 01:33:36.374987    6044 command_runner.go:130] >   Normal  NodeHasSufficientMemory  26m (x8 over 26m)  kubelet          Node multinode-240000 status is now: NodeHasSufficientMemory
	I0328 01:33:36.374987    6044 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    26m (x8 over 26m)  kubelet          Node multinode-240000 status is now: NodeHasNoDiskPressure
	I0328 01:33:36.374987    6044 command_runner.go:130] >   Normal  NodeHasSufficientPID     26m (x7 over 26m)  kubelet          Node multinode-240000 status is now: NodeHasSufficientPID
	I0328 01:33:36.374987    6044 command_runner.go:130] >   Normal  NodeAllocatableEnforced  26m                kubelet          Updated Node Allocatable limit across pods
	I0328 01:33:36.374987    6044 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    26m                kubelet          Node multinode-240000 status is now: NodeHasNoDiskPressure
	I0328 01:33:36.374987    6044 command_runner.go:130] >   Normal  NodeAllocatableEnforced  26m                kubelet          Updated Node Allocatable limit across pods
	I0328 01:33:36.374987    6044 command_runner.go:130] >   Normal  NodeHasSufficientMemory  26m                kubelet          Node multinode-240000 status is now: NodeHasSufficientMemory
	I0328 01:33:36.374987    6044 command_runner.go:130] >   Normal  NodeHasSufficientPID     26m                kubelet          Node multinode-240000 status is now: NodeHasSufficientPID
	I0328 01:33:36.374987    6044 command_runner.go:130] >   Normal  Starting                 26m                kubelet          Starting kubelet.
	I0328 01:33:36.374987    6044 command_runner.go:130] >   Normal  RegisteredNode           25m                node-controller  Node multinode-240000 event: Registered Node multinode-240000 in Controller
	I0328 01:33:36.374987    6044 command_runner.go:130] >   Normal  NodeReady                25m                kubelet          Node multinode-240000 status is now: NodeReady
	I0328 01:33:36.374987    6044 command_runner.go:130] >   Normal  Starting                 83s                kubelet          Starting kubelet.
	I0328 01:33:36.374987    6044 command_runner.go:130] >   Normal  NodeHasSufficientPID     83s (x7 over 83s)  kubelet          Node multinode-240000 status is now: NodeHasSufficientPID
	I0328 01:33:36.374987    6044 command_runner.go:130] >   Normal  NodeAllocatableEnforced  83s                kubelet          Updated Node Allocatable limit across pods
	I0328 01:33:36.374987    6044 command_runner.go:130] >   Normal  NodeHasSufficientMemory  82s (x8 over 83s)  kubelet          Node multinode-240000 status is now: NodeHasSufficientMemory
	I0328 01:33:36.374987    6044 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    82s (x8 over 83s)  kubelet          Node multinode-240000 status is now: NodeHasNoDiskPressure
	I0328 01:33:36.374987    6044 command_runner.go:130] >   Normal  RegisteredNode           64s                node-controller  Node multinode-240000 event: Registered Node multinode-240000 in Controller
	I0328 01:33:36.374987    6044 command_runner.go:130] > Name:               multinode-240000-m02
	I0328 01:33:36.374987    6044 command_runner.go:130] > Roles:              <none>
	I0328 01:33:36.374987    6044 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0328 01:33:36.374987    6044 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0328 01:33:36.374987    6044 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0328 01:33:36.374987    6044 command_runner.go:130] >                     kubernetes.io/hostname=multinode-240000-m02
	I0328 01:33:36.374987    6044 command_runner.go:130] >                     kubernetes.io/os=linux
	I0328 01:33:36.374987    6044 command_runner.go:130] >                     minikube.k8s.io/commit=1a940f980f77c9bfc8ef93678db8c1f49c9dd79d
	I0328 01:33:36.374987    6044 command_runner.go:130] >                     minikube.k8s.io/name=multinode-240000
	I0328 01:33:36.374987    6044 command_runner.go:130] >                     minikube.k8s.io/primary=false
	I0328 01:33:36.374987    6044 command_runner.go:130] >                     minikube.k8s.io/updated_at=2024_03_28T01_10_55_0700
	I0328 01:33:36.374987    6044 command_runner.go:130] >                     minikube.k8s.io/version=v1.33.0-beta.0
	I0328 01:33:36.374987    6044 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0328 01:33:36.374987    6044 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0328 01:33:36.374987    6044 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0328 01:33:36.374987    6044 command_runner.go:130] > CreationTimestamp:  Thu, 28 Mar 2024 01:10:54 +0000
	I0328 01:33:36.375539    6044 command_runner.go:130] > Taints:             node.kubernetes.io/unreachable:NoExecute
	I0328 01:33:36.375539    6044 command_runner.go:130] >                     node.kubernetes.io/unreachable:NoSchedule
	I0328 01:33:36.375539    6044 command_runner.go:130] > Unschedulable:      false
	I0328 01:33:36.375539    6044 command_runner.go:130] > Lease:
	I0328 01:33:36.375539    6044 command_runner.go:130] >   HolderIdentity:  multinode-240000-m02
	I0328 01:33:36.375539    6044 command_runner.go:130] >   AcquireTime:     <unset>
	I0328 01:33:36.375539    6044 command_runner.go:130] >   RenewTime:       Thu, 28 Mar 2024 01:28:58 +0000
	I0328 01:33:36.375539    6044 command_runner.go:130] > Conditions:
	I0328 01:33:36.375539    6044 command_runner.go:130] >   Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	I0328 01:33:36.375539    6044 command_runner.go:130] >   ----             ------    -----------------                 ------------------                ------              -------
	I0328 01:33:36.375539    6044 command_runner.go:130] >   MemoryPressure   Unknown   Thu, 28 Mar 2024 01:27:18 +0000   Thu, 28 Mar 2024 01:33:12 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0328 01:33:36.375539    6044 command_runner.go:130] >   DiskPressure     Unknown   Thu, 28 Mar 2024 01:27:18 +0000   Thu, 28 Mar 2024 01:33:12 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0328 01:33:36.375698    6044 command_runner.go:130] >   PIDPressure      Unknown   Thu, 28 Mar 2024 01:27:18 +0000   Thu, 28 Mar 2024 01:33:12 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0328 01:33:36.375698    6044 command_runner.go:130] >   Ready            Unknown   Thu, 28 Mar 2024 01:27:18 +0000   Thu, 28 Mar 2024 01:33:12 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0328 01:33:36.375698    6044 command_runner.go:130] > Addresses:
	I0328 01:33:36.375754    6044 command_runner.go:130] >   InternalIP:  172.28.230.250
	I0328 01:33:36.375754    6044 command_runner.go:130] >   Hostname:    multinode-240000-m02
	I0328 01:33:36.375754    6044 command_runner.go:130] > Capacity:
	I0328 01:33:36.375754    6044 command_runner.go:130] >   cpu:                2
	I0328 01:33:36.375754    6044 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0328 01:33:36.375754    6044 command_runner.go:130] >   hugepages-2Mi:      0
	I0328 01:33:36.375754    6044 command_runner.go:130] >   memory:             2164264Ki
	I0328 01:33:36.375754    6044 command_runner.go:130] >   pods:               110
	I0328 01:33:36.375754    6044 command_runner.go:130] > Allocatable:
	I0328 01:33:36.375754    6044 command_runner.go:130] >   cpu:                2
	I0328 01:33:36.375754    6044 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0328 01:33:36.375754    6044 command_runner.go:130] >   hugepages-2Mi:      0
	I0328 01:33:36.375754    6044 command_runner.go:130] >   memory:             2164264Ki
	I0328 01:33:36.375839    6044 command_runner.go:130] >   pods:               110
	I0328 01:33:36.375839    6044 command_runner.go:130] > System Info:
	I0328 01:33:36.375839    6044 command_runner.go:130] >   Machine ID:                 2bcbb6f523d04ea69ba7f23d0cdfec75
	I0328 01:33:36.375839    6044 command_runner.go:130] >   System UUID:                d499bd2a-38ff-6a40-b0a5-5534aeedd854
	I0328 01:33:36.375839    6044 command_runner.go:130] >   Boot ID:                    cfc1ec0e-7646-40c9-8245-9d09d77d6b1d
	I0328 01:33:36.375839    6044 command_runner.go:130] >   Kernel Version:             5.10.207
	I0328 01:33:36.375839    6044 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0328 01:33:36.375839    6044 command_runner.go:130] >   Operating System:           linux
	I0328 01:33:36.375902    6044 command_runner.go:130] >   Architecture:               amd64
	I0328 01:33:36.375902    6044 command_runner.go:130] >   Container Runtime Version:  docker://26.0.0
	I0328 01:33:36.375902    6044 command_runner.go:130] >   Kubelet Version:            v1.29.3
	I0328 01:33:36.375902    6044 command_runner.go:130] >   Kube-Proxy Version:         v1.29.3
	I0328 01:33:36.375902    6044 command_runner.go:130] > PodCIDR:                      10.244.1.0/24
	I0328 01:33:36.375902    6044 command_runner.go:130] > PodCIDRs:                     10.244.1.0/24
	I0328 01:33:36.375902    6044 command_runner.go:130] > Non-terminated Pods:          (3 in total)
	I0328 01:33:36.375970    6044 command_runner.go:130] >   Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0328 01:33:36.375970    6044 command_runner.go:130] >   ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	I0328 01:33:36.375998    6044 command_runner.go:130] >   default                     busybox-7fdf7869d9-zgwm4    0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	I0328 01:33:36.375998    6044 command_runner.go:130] >   kube-system                 kindnet-hsnfl               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      22m
	I0328 01:33:36.375998    6044 command_runner.go:130] >   kube-system                 kube-proxy-t88gz            0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	I0328 01:33:36.375998    6044 command_runner.go:130] > Allocated resources:
	I0328 01:33:36.376060    6044 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0328 01:33:36.376078    6044 command_runner.go:130] >   Resource           Requests   Limits
	I0328 01:33:36.376078    6044 command_runner.go:130] >   --------           --------   ------
	I0328 01:33:36.376102    6044 command_runner.go:130] >   cpu                100m (5%)  100m (5%)
	I0328 01:33:36.376102    6044 command_runner.go:130] >   memory             50Mi (2%)  50Mi (2%)
	I0328 01:33:36.376102    6044 command_runner.go:130] >   ephemeral-storage  0 (0%)     0 (0%)
	I0328 01:33:36.376102    6044 command_runner.go:130] >   hugepages-2Mi      0 (0%)     0 (0%)
	I0328 01:33:36.376102    6044 command_runner.go:130] > Events:
	I0328 01:33:36.376102    6044 command_runner.go:130] >   Type    Reason                   Age                From             Message
	I0328 01:33:36.376163    6044 command_runner.go:130] >   ----    ------                   ----               ----             -------
	I0328 01:33:36.376163    6044 command_runner.go:130] >   Normal  Starting                 22m                kube-proxy       
	I0328 01:33:36.376163    6044 command_runner.go:130] >   Normal  Starting                 22m                kubelet          Starting kubelet.
	I0328 01:33:36.376163    6044 command_runner.go:130] >   Normal  NodeHasSufficientMemory  22m (x2 over 22m)  kubelet          Node multinode-240000-m02 status is now: NodeHasSufficientMemory
	I0328 01:33:36.376223    6044 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    22m (x2 over 22m)  kubelet          Node multinode-240000-m02 status is now: NodeHasNoDiskPressure
	I0328 01:33:36.376223    6044 command_runner.go:130] >   Normal  NodeHasSufficientPID     22m (x2 over 22m)  kubelet          Node multinode-240000-m02 status is now: NodeHasSufficientPID
	I0328 01:33:36.376254    6044 command_runner.go:130] >   Normal  NodeAllocatableEnforced  22m                kubelet          Updated Node Allocatable limit across pods
	I0328 01:33:36.376254    6044 command_runner.go:130] >   Normal  RegisteredNode           22m                node-controller  Node multinode-240000-m02 event: Registered Node multinode-240000-m02 in Controller
	I0328 01:33:36.376254    6044 command_runner.go:130] >   Normal  NodeReady                22m                kubelet          Node multinode-240000-m02 status is now: NodeReady
	I0328 01:33:36.376319    6044 command_runner.go:130] >   Normal  RegisteredNode           64s                node-controller  Node multinode-240000-m02 event: Registered Node multinode-240000-m02 in Controller
	I0328 01:33:36.376345    6044 command_runner.go:130] >   Normal  NodeNotReady             24s                node-controller  Node multinode-240000-m02 status is now: NodeNotReady
	I0328 01:33:36.376373    6044 command_runner.go:130] > Name:               multinode-240000-m03
	I0328 01:33:36.376373    6044 command_runner.go:130] > Roles:              <none>
	I0328 01:33:36.376373    6044 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0328 01:33:36.376373    6044 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0328 01:33:36.376373    6044 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0328 01:33:36.376373    6044 command_runner.go:130] >                     kubernetes.io/hostname=multinode-240000-m03
	I0328 01:33:36.376373    6044 command_runner.go:130] >                     kubernetes.io/os=linux
	I0328 01:33:36.376451    6044 command_runner.go:130] >                     minikube.k8s.io/commit=1a940f980f77c9bfc8ef93678db8c1f49c9dd79d
	I0328 01:33:36.376451    6044 command_runner.go:130] >                     minikube.k8s.io/name=multinode-240000
	I0328 01:33:36.376485    6044 command_runner.go:130] >                     minikube.k8s.io/primary=false
	I0328 01:33:36.376485    6044 command_runner.go:130] >                     minikube.k8s.io/updated_at=2024_03_28T01_27_31_0700
	I0328 01:33:36.376485    6044 command_runner.go:130] >                     minikube.k8s.io/version=v1.33.0-beta.0
	I0328 01:33:36.376485    6044 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0328 01:33:36.376485    6044 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0328 01:33:36.376485    6044 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0328 01:33:36.376485    6044 command_runner.go:130] > CreationTimestamp:  Thu, 28 Mar 2024 01:27:30 +0000
	I0328 01:33:36.376485    6044 command_runner.go:130] > Taints:             node.kubernetes.io/unreachable:NoExecute
	I0328 01:33:36.376485    6044 command_runner.go:130] >                     node.kubernetes.io/unreachable:NoSchedule
	I0328 01:33:36.376485    6044 command_runner.go:130] > Unschedulable:      false
	I0328 01:33:36.376613    6044 command_runner.go:130] > Lease:
	I0328 01:33:36.376613    6044 command_runner.go:130] >   HolderIdentity:  multinode-240000-m03
	I0328 01:33:36.376634    6044 command_runner.go:130] >   AcquireTime:     <unset>
	I0328 01:33:36.376634    6044 command_runner.go:130] >   RenewTime:       Thu, 28 Mar 2024 01:28:31 +0000
	I0328 01:33:36.376634    6044 command_runner.go:130] > Conditions:
	I0328 01:33:36.376634    6044 command_runner.go:130] >   Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	I0328 01:33:36.376705    6044 command_runner.go:130] >   ----             ------    -----------------                 ------------------                ------              -------
	I0328 01:33:36.376705    6044 command_runner.go:130] >   MemoryPressure   Unknown   Thu, 28 Mar 2024 01:27:36 +0000   Thu, 28 Mar 2024 01:29:14 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0328 01:33:36.376705    6044 command_runner.go:130] >   DiskPressure     Unknown   Thu, 28 Mar 2024 01:27:36 +0000   Thu, 28 Mar 2024 01:29:14 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0328 01:33:36.376772    6044 command_runner.go:130] >   PIDPressure      Unknown   Thu, 28 Mar 2024 01:27:36 +0000   Thu, 28 Mar 2024 01:29:14 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0328 01:33:36.376772    6044 command_runner.go:130] >   Ready            Unknown   Thu, 28 Mar 2024 01:27:36 +0000   Thu, 28 Mar 2024 01:29:14 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0328 01:33:36.376772    6044 command_runner.go:130] > Addresses:
	I0328 01:33:36.376772    6044 command_runner.go:130] >   InternalIP:  172.28.224.172
	I0328 01:33:36.376772    6044 command_runner.go:130] >   Hostname:    multinode-240000-m03
	I0328 01:33:36.376772    6044 command_runner.go:130] > Capacity:
	I0328 01:33:36.376772    6044 command_runner.go:130] >   cpu:                2
	I0328 01:33:36.376772    6044 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0328 01:33:36.376772    6044 command_runner.go:130] >   hugepages-2Mi:      0
	I0328 01:33:36.376861    6044 command_runner.go:130] >   memory:             2164264Ki
	I0328 01:33:36.376861    6044 command_runner.go:130] >   pods:               110
	I0328 01:33:36.376861    6044 command_runner.go:130] > Allocatable:
	I0328 01:33:36.376861    6044 command_runner.go:130] >   cpu:                2
	I0328 01:33:36.376861    6044 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0328 01:33:36.376861    6044 command_runner.go:130] >   hugepages-2Mi:      0
	I0328 01:33:36.376861    6044 command_runner.go:130] >   memory:             2164264Ki
	I0328 01:33:36.376861    6044 command_runner.go:130] >   pods:               110
	I0328 01:33:36.376935    6044 command_runner.go:130] > System Info:
	I0328 01:33:36.376935    6044 command_runner.go:130] >   Machine ID:                 53e5a22090614654950f5f4d91307651
	I0328 01:33:36.376935    6044 command_runner.go:130] >   System UUID:                1b1cc332-0092-fa4b-8d09-1c599aae83ad
	I0328 01:33:36.376935    6044 command_runner.go:130] >   Boot ID:                    7cabd891-d8ad-4af2-8060-94ae87174528
	I0328 01:33:36.376935    6044 command_runner.go:130] >   Kernel Version:             5.10.207
	I0328 01:33:36.376991    6044 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0328 01:33:36.376991    6044 command_runner.go:130] >   Operating System:           linux
	I0328 01:33:36.377018    6044 command_runner.go:130] >   Architecture:               amd64
	I0328 01:33:36.377018    6044 command_runner.go:130] >   Container Runtime Version:  docker://26.0.0
	I0328 01:33:36.377048    6044 command_runner.go:130] >   Kubelet Version:            v1.29.3
	I0328 01:33:36.377048    6044 command_runner.go:130] >   Kube-Proxy Version:         v1.29.3
	I0328 01:33:36.377048    6044 command_runner.go:130] > PodCIDR:                      10.244.3.0/24
	I0328 01:33:36.377080    6044 command_runner.go:130] > PodCIDRs:                     10.244.3.0/24
	I0328 01:33:36.377080    6044 command_runner.go:130] > Non-terminated Pods:          (2 in total)
	I0328 01:33:36.377115    6044 command_runner.go:130] >   Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0328 01:33:36.377143    6044 command_runner.go:130] >   ---------                   ----                ------------  ----------  ---------------  -------------  ---
	I0328 01:33:36.377143    6044 command_runner.go:130] >   kube-system                 kindnet-jvgx2       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      17m
	I0328 01:33:36.377143    6044 command_runner.go:130] >   kube-system                 kube-proxy-55rch    0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	I0328 01:33:36.377143    6044 command_runner.go:130] > Allocated resources:
	I0328 01:33:36.377143    6044 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0328 01:33:36.377143    6044 command_runner.go:130] >   Resource           Requests   Limits
	I0328 01:33:36.377143    6044 command_runner.go:130] >   --------           --------   ------
	I0328 01:33:36.377143    6044 command_runner.go:130] >   cpu                100m (5%)  100m (5%)
	I0328 01:33:36.377143    6044 command_runner.go:130] >   memory             50Mi (2%)  50Mi (2%)
	I0328 01:33:36.377143    6044 command_runner.go:130] >   ephemeral-storage  0 (0%)     0 (0%)
	I0328 01:33:36.377143    6044 command_runner.go:130] >   hugepages-2Mi      0 (0%)     0 (0%)
	I0328 01:33:36.377143    6044 command_runner.go:130] > Events:
	I0328 01:33:36.377143    6044 command_runner.go:130] >   Type    Reason                   Age                  From             Message
	I0328 01:33:36.377143    6044 command_runner.go:130] >   ----    ------                   ----                 ----             -------
	I0328 01:33:36.377143    6044 command_runner.go:130] >   Normal  Starting                 17m                  kube-proxy       
	I0328 01:33:36.377143    6044 command_runner.go:130] >   Normal  Starting                 6m3s                 kube-proxy       
	I0328 01:33:36.377143    6044 command_runner.go:130] >   Normal  NodeHasSufficientMemory  17m (x2 over 17m)    kubelet          Node multinode-240000-m03 status is now: NodeHasSufficientMemory
	I0328 01:33:36.377143    6044 command_runner.go:130] >   Normal  NodeAllocatableEnforced  17m                  kubelet          Updated Node Allocatable limit across pods
	I0328 01:33:36.377143    6044 command_runner.go:130] >   Normal  Starting                 17m                  kubelet          Starting kubelet.
	I0328 01:33:36.377143    6044 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    17m (x2 over 17m)    kubelet          Node multinode-240000-m03 status is now: NodeHasNoDiskPressure
	I0328 01:33:36.377143    6044 command_runner.go:130] >   Normal  NodeHasSufficientPID     17m (x2 over 17m)    kubelet          Node multinode-240000-m03 status is now: NodeHasSufficientPID
	I0328 01:33:36.377143    6044 command_runner.go:130] >   Normal  NodeReady                17m                  kubelet          Node multinode-240000-m03 status is now: NodeReady
	I0328 01:33:36.377143    6044 command_runner.go:130] >   Normal  Starting                 6m6s                 kubelet          Starting kubelet.
	I0328 01:33:36.377143    6044 command_runner.go:130] >   Normal  NodeHasSufficientMemory  6m6s (x2 over 6m6s)  kubelet          Node multinode-240000-m03 status is now: NodeHasSufficientMemory
	I0328 01:33:36.377143    6044 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    6m6s (x2 over 6m6s)  kubelet          Node multinode-240000-m03 status is now: NodeHasNoDiskPressure
	I0328 01:33:36.377143    6044 command_runner.go:130] >   Normal  NodeHasSufficientPID     6m6s (x2 over 6m6s)  kubelet          Node multinode-240000-m03 status is now: NodeHasSufficientPID
	I0328 01:33:36.377143    6044 command_runner.go:130] >   Normal  NodeAllocatableEnforced  6m6s                 kubelet          Updated Node Allocatable limit across pods
	I0328 01:33:36.377143    6044 command_runner.go:130] >   Normal  RegisteredNode           6m2s                 node-controller  Node multinode-240000-m03 event: Registered Node multinode-240000-m03 in Controller
	I0328 01:33:36.377143    6044 command_runner.go:130] >   Normal  NodeReady                6m                   kubelet          Node multinode-240000-m03 status is now: NodeReady
	I0328 01:33:36.377143    6044 command_runner.go:130] >   Normal  NodeNotReady             4m22s                node-controller  Node multinode-240000-m03 status is now: NodeNotReady
	I0328 01:33:36.377143    6044 command_runner.go:130] >   Normal  RegisteredNode           64s                  node-controller  Node multinode-240000-m03 event: Registered Node multinode-240000-m03 in Controller
	I0328 01:33:36.389824    6044 logs.go:123] Gathering logs for coredns [29e516c918ef] ...
	I0328 01:33:36.389824    6044 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29e516c918ef"
	I0328 01:33:36.421793    6044 command_runner.go:130] > .:53
	I0328 01:33:36.422599    6044 command_runner.go:130] > [INFO] plugin/reload: Running configuration SHA512 = 61f4d0960164fdf8d8157aaa96d041acf5b29f3c98ba802d705114162ff9f2cc889bbb973f9b8023f3112734912ee6f4eadc4faa21115183d5697de30dae3805
	I0328 01:33:36.422599    6044 command_runner.go:130] > CoreDNS-1.11.1
	I0328 01:33:36.422632    6044 command_runner.go:130] > linux/amd64, go1.20.7, ae2bbc2
	I0328 01:33:36.422632    6044 command_runner.go:130] > [INFO] 127.0.0.1:60283 - 16312 "HINFO IN 2326044719089555672.3300393267380208701. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.054677372s
	I0328 01:33:36.422632    6044 command_runner.go:130] > [INFO] 10.244.0.3:41371 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000247501s
	I0328 01:33:36.422632    6044 command_runner.go:130] > [INFO] 10.244.0.3:43447 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.117900616s
	I0328 01:33:36.422632    6044 command_runner.go:130] > [INFO] 10.244.0.3:42513 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd 60 0.033474818s
	I0328 01:33:36.422632    6044 command_runner.go:130] > [INFO] 10.244.0.3:40448 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 1.188161196s
	I0328 01:33:36.422632    6044 command_runner.go:130] > [INFO] 10.244.1.2:56943 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000152401s
	I0328 01:33:36.422632    6044 command_runner.go:130] > [INFO] 10.244.1.2:41058 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 140 0.000086901s
	I0328 01:33:36.422632    6044 command_runner.go:130] > [INFO] 10.244.1.2:34293 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd 60 0.0000605s
	I0328 01:33:36.422632    6044 command_runner.go:130] > [INFO] 10.244.1.2:49894 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,aa,rd,ra 140 0.00006s
	I0328 01:33:36.422632    6044 command_runner.go:130] > [INFO] 10.244.0.3:49837 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001111s
	I0328 01:33:36.422632    6044 command_runner.go:130] > [INFO] 10.244.0.3:33220 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.017189461s
	I0328 01:33:36.422632    6044 command_runner.go:130] > [INFO] 10.244.0.3:45579 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000277601s
	I0328 01:33:36.422632    6044 command_runner.go:130] > [INFO] 10.244.0.3:51082 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000190101s
	I0328 01:33:36.422632    6044 command_runner.go:130] > [INFO] 10.244.0.3:51519 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.026528294s
	I0328 01:33:36.422632    6044 command_runner.go:130] > [INFO] 10.244.0.3:59498 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000117701s
	I0328 01:33:36.422632    6044 command_runner.go:130] > [INFO] 10.244.0.3:42474 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000217s
	I0328 01:33:36.422632    6044 command_runner.go:130] > [INFO] 10.244.0.3:60151 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0001204s
	I0328 01:33:36.422632    6044 command_runner.go:130] > [INFO] 10.244.1.2:50831 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001128s
	I0328 01:33:36.422632    6044 command_runner.go:130] > [INFO] 10.244.1.2:41628 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.0000727s
	I0328 01:33:36.422632    6044 command_runner.go:130] > [INFO] 10.244.1.2:58750 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000090601s
	I0328 01:33:36.422632    6044 command_runner.go:130] > [INFO] 10.244.1.2:59003 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0000565s
	I0328 01:33:36.422632    6044 command_runner.go:130] > [INFO] 10.244.1.2:44988 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.0000534s
	I0328 01:33:36.422632    6044 command_runner.go:130] > [INFO] 10.244.1.2:46242 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0000553s
	I0328 01:33:36.422632    6044 command_runner.go:130] > [INFO] 10.244.1.2:54917 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0000638s
	I0328 01:33:36.422632    6044 command_runner.go:130] > [INFO] 10.244.1.2:39304 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000177201s
	I0328 01:33:36.422632    6044 command_runner.go:130] > [INFO] 10.244.0.3:48823 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0000796s
	I0328 01:33:36.422632    6044 command_runner.go:130] > [INFO] 10.244.0.3:44709 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000142901s
	I0328 01:33:36.422632    6044 command_runner.go:130] > [INFO] 10.244.0.3:48375 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0000774s
	I0328 01:33:36.422632    6044 command_runner.go:130] > [INFO] 10.244.0.3:58925 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000125101s
	I0328 01:33:36.422632    6044 command_runner.go:130] > [INFO] 10.244.1.2:59246 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001171s
	I0328 01:33:36.422632    6044 command_runner.go:130] > [INFO] 10.244.1.2:47730 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0000697s
	I0328 01:33:36.422632    6044 command_runner.go:130] > [INFO] 10.244.1.2:33031 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0000695s
	I0328 01:33:36.422632    6044 command_runner.go:130] > [INFO] 10.244.1.2:50853 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000057s
	I0328 01:33:36.422632    6044 command_runner.go:130] > [INFO] 10.244.0.3:39682 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000390002s
	I0328 01:33:36.422632    6044 command_runner.go:130] > [INFO] 10.244.0.3:52761 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000108301s
	I0328 01:33:36.422632    6044 command_runner.go:130] > [INFO] 10.244.0.3:46476 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000158601s
	I0328 01:33:36.423248    6044 command_runner.go:130] > [INFO] 10.244.0.3:57613 - 5 "PTR IN 1.224.28.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.0000931s
	I0328 01:33:36.423248    6044 command_runner.go:130] > [INFO] 10.244.1.2:43367 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000233301s
	I0328 01:33:36.423248    6044 command_runner.go:130] > [INFO] 10.244.1.2:50120 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.0002331s
	I0328 01:33:36.423248    6044 command_runner.go:130] > [INFO] 10.244.1.2:43779 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.0000821s
	I0328 01:33:36.423334    6044 command_runner.go:130] > [INFO] 10.244.1.2:37155 - 5 "PTR IN 1.224.28.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.0000589s
	I0328 01:33:36.423334    6044 command_runner.go:130] > [INFO] SIGTERM: Shutting down servers then terminating
	I0328 01:33:36.423334    6044 command_runner.go:130] > [INFO] plugin/health: Going into lameduck mode for 5s
	I0328 01:33:36.426193    6044 logs.go:123] Gathering logs for kube-scheduler [bc83a37dbd03] ...
	I0328 01:33:36.426193    6044 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc83a37dbd03"
	I0328 01:33:36.454246    6044 command_runner.go:130] ! I0328 01:32:16.704993       1 serving.go:380] Generated self-signed cert in-memory
	I0328 01:33:36.454307    6044 command_runner.go:130] ! W0328 01:32:19.361735       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	I0328 01:33:36.456978    6044 command_runner.go:130] ! W0328 01:32:19.361772       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	I0328 01:33:36.456978    6044 command_runner.go:130] ! W0328 01:32:19.361786       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	I0328 01:33:36.457301    6044 command_runner.go:130] ! W0328 01:32:19.361794       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0328 01:33:36.457368    6044 command_runner.go:130] ! I0328 01:32:19.443650       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.29.3"
	I0328 01:33:36.457368    6044 command_runner.go:130] ! I0328 01:32:19.443696       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0328 01:33:36.457368    6044 command_runner.go:130] ! I0328 01:32:19.451824       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0328 01:33:36.457368    6044 command_runner.go:130] ! I0328 01:32:19.452157       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0328 01:33:36.457368    6044 command_runner.go:130] ! I0328 01:32:19.452206       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0328 01:33:36.457368    6044 command_runner.go:130] ! I0328 01:32:19.452231       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0328 01:33:36.457368    6044 command_runner.go:130] ! I0328 01:32:19.556393       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0328 01:33:36.460469    6044 logs.go:123] Gathering logs for kindnet [ee99098e42fc] ...
	I0328 01:33:36.460579    6044 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee99098e42fc"
	I0328 01:33:36.491736    6044 command_runner.go:130] ! I0328 01:32:22.319753       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0328 01:33:36.491736    6044 command_runner.go:130] ! I0328 01:32:22.320254       1 main.go:107] hostIP = 172.28.229.19
	I0328 01:33:36.491736    6044 command_runner.go:130] ! podIP = 172.28.229.19
	I0328 01:33:36.491736    6044 command_runner.go:130] ! I0328 01:32:22.321740       1 main.go:116] setting mtu 1500 for CNI 
	I0328 01:33:36.492649    6044 command_runner.go:130] ! I0328 01:32:22.321777       1 main.go:146] kindnetd IP family: "ipv4"
	I0328 01:33:36.492649    6044 command_runner.go:130] ! I0328 01:32:22.321799       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0328 01:33:36.492732    6044 command_runner.go:130] ! I0328 01:32:52.738929       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: i/o timeout
	I0328 01:33:36.492772    6044 command_runner.go:130] ! I0328 01:32:52.794200       1 main.go:223] Handling node with IPs: map[172.28.229.19:{}]
	I0328 01:33:36.492825    6044 command_runner.go:130] ! I0328 01:32:52.794320       1 main.go:227] handling current node
	I0328 01:33:36.492825    6044 command_runner.go:130] ! I0328 01:32:52.794662       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:36.492865    6044 command_runner.go:130] ! I0328 01:32:52.794805       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:36.492915    6044 command_runner.go:130] ! I0328 01:32:52.794957       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 172.28.230.250 Flags: [] Table: 0} 
	I0328 01:33:36.492956    6044 command_runner.go:130] ! I0328 01:32:52.795458       1 main.go:223] Handling node with IPs: map[172.28.224.172:{}]
	I0328 01:33:36.492998    6044 command_runner.go:130] ! I0328 01:32:52.795540       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.3.0/24] 
	I0328 01:33:36.493038    6044 command_runner.go:130] ! I0328 01:32:52.795606       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.3.0/24 Src: <nil> Gw: 172.28.224.172 Flags: [] Table: 0} 
	I0328 01:33:36.493038    6044 command_runner.go:130] ! I0328 01:33:02.803479       1 main.go:223] Handling node with IPs: map[172.28.229.19:{}]
	I0328 01:33:36.493038    6044 command_runner.go:130] ! I0328 01:33:02.803569       1 main.go:227] handling current node
	I0328 01:33:36.493128    6044 command_runner.go:130] ! I0328 01:33:02.803584       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:36.493128    6044 command_runner.go:130] ! I0328 01:33:02.803592       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:36.493128    6044 command_runner.go:130] ! I0328 01:33:02.803771       1 main.go:223] Handling node with IPs: map[172.28.224.172:{}]
	I0328 01:33:36.493128    6044 command_runner.go:130] ! I0328 01:33:02.803938       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.3.0/24] 
	I0328 01:33:36.493197    6044 command_runner.go:130] ! I0328 01:33:12.813148       1 main.go:223] Handling node with IPs: map[172.28.229.19:{}]
	I0328 01:33:36.493231    6044 command_runner.go:130] ! I0328 01:33:12.813258       1 main.go:227] handling current node
	I0328 01:33:36.493231    6044 command_runner.go:130] ! I0328 01:33:12.813273       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:36.493231    6044 command_runner.go:130] ! I0328 01:33:12.813281       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:36.493284    6044 command_runner.go:130] ! I0328 01:33:12.813393       1 main.go:223] Handling node with IPs: map[172.28.224.172:{}]
	I0328 01:33:36.493318    6044 command_runner.go:130] ! I0328 01:33:12.813441       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.3.0/24] 
	I0328 01:33:36.493347    6044 command_runner.go:130] ! I0328 01:33:22.829358       1 main.go:223] Handling node with IPs: map[172.28.229.19:{}]
	I0328 01:33:36.493347    6044 command_runner.go:130] ! I0328 01:33:22.829449       1 main.go:227] handling current node
	I0328 01:33:36.493347    6044 command_runner.go:130] ! I0328 01:33:22.829466       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:36.493347    6044 command_runner.go:130] ! I0328 01:33:22.829475       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:36.493347    6044 command_runner.go:130] ! I0328 01:33:22.829915       1 main.go:223] Handling node with IPs: map[172.28.224.172:{}]
	I0328 01:33:36.493347    6044 command_runner.go:130] ! I0328 01:33:22.829982       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.3.0/24] 
	I0328 01:33:36.493347    6044 command_runner.go:130] ! I0328 01:33:32.845005       1 main.go:223] Handling node with IPs: map[172.28.229.19:{}]
	I0328 01:33:36.493347    6044 command_runner.go:130] ! I0328 01:33:32.845083       1 main.go:227] handling current node
	I0328 01:33:36.493347    6044 command_runner.go:130] ! I0328 01:33:32.845096       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:36.493347    6044 command_runner.go:130] ! I0328 01:33:32.845121       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:36.493347    6044 command_runner.go:130] ! I0328 01:33:32.845312       1 main.go:223] Handling node with IPs: map[172.28.224.172:{}]
	I0328 01:33:36.493347    6044 command_runner.go:130] ! I0328 01:33:32.845670       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.3.0/24] 
	I0328 01:33:39.000229    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/namespaces/kube-system/pods
	I0328 01:33:39.000229    6044 round_trippers.go:469] Request Headers:
	I0328 01:33:39.000326    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:33:39.000326    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:33:39.005527    6044 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 01:33:39.005527    6044 round_trippers.go:577] Response Headers:
	I0328 01:33:39.005527    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:33:39.005527    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:33:39 GMT
	I0328 01:33:39.006196    6044 round_trippers.go:580]     Audit-Id: 19660e92-c8d3-4a64-8bd9-49db821e51ec
	I0328 01:33:39.006196    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:33:39.006196    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:33:39.006196    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:33:39.008456    6044 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"2077"},"items":[{"metadata":{"name":"coredns-76f75df574-776ph","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"dc1416cc-736d-4eab-b95d-e963572b78e3","resourceVersion":"2063","creationTimestamp":"2024-03-28T01:07:44Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"840f6c47-adf3-4c08-8b73-04e55f98f236","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:07:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"840f6c47-adf3-4c08-8b73-04e55f98f236\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 86569 chars]
	I0328 01:33:39.012910    6044 system_pods.go:59] 12 kube-system pods found
	I0328 01:33:39.012910    6044 system_pods.go:61] "coredns-76f75df574-776ph" [dc1416cc-736d-4eab-b95d-e963572b78e3] Running
	I0328 01:33:39.012910    6044 system_pods.go:61] "etcd-multinode-240000" [0a33e012-ebfe-4ac4-bf0b-ffccdd7308de] Running
	I0328 01:33:39.012910    6044 system_pods.go:61] "kindnet-hsnfl" [e049fea9-9620-4eb5-9eb0-056c68076331] Running
	I0328 01:33:39.012910    6044 system_pods.go:61] "kindnet-jvgx2" [507e3461-4bd4-46b9-9189-606b3506a742] Running
	I0328 01:33:39.012910    6044 system_pods.go:61] "kindnet-rwghf" [7c75e225-0e90-4916-bf27-a00a036e0955] Running
	I0328 01:33:39.012910    6044 system_pods.go:61] "kube-apiserver-multinode-240000" [8b9b4cf7-40b0-4a3e-96ca-28c934f9789a] Running
	I0328 01:33:39.012910    6044 system_pods.go:61] "kube-controller-manager-multinode-240000" [4a79ab06-2314-43bb-8e37-45b9aab24e4e] Running
	I0328 01:33:39.012910    6044 system_pods.go:61] "kube-proxy-47rqg" [22fd5683-834d-47ae-a5b4-1ed980514e1b] Running
	I0328 01:33:39.012910    6044 system_pods.go:61] "kube-proxy-55rch" [a96f841b-3e8f-42c1-be63-03914c0b90e8] Running
	I0328 01:33:39.012910    6044 system_pods.go:61] "kube-proxy-t88gz" [695603ac-ab24-4206-9728-342b2af018e4] Running
	I0328 01:33:39.012910    6044 system_pods.go:61] "kube-scheduler-multinode-240000" [7670489f-fb6c-4b5f-80e8-5fe8de8d7d19] Running
	I0328 01:33:39.012910    6044 system_pods.go:61] "storage-provisioner" [3f881c2f-5b9a-4dc5-bc8c-56784b9ff60f] Running
	I0328 01:33:39.012910    6044 system_pods.go:74] duration metric: took 3.9108292s to wait for pod list to return data ...
	I0328 01:33:39.012910    6044 default_sa.go:34] waiting for default service account to be created ...
	I0328 01:33:39.013456    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/namespaces/default/serviceaccounts
	I0328 01:33:39.013544    6044 round_trippers.go:469] Request Headers:
	I0328 01:33:39.013544    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:33:39.013625    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:33:39.016380    6044 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0328 01:33:39.017375    6044 round_trippers.go:577] Response Headers:
	I0328 01:33:39.017375    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:33:39.017375    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:33:39.017375    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:33:39.017375    6044 round_trippers.go:580]     Content-Length: 262
	I0328 01:33:39.017375    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:33:39 GMT
	I0328 01:33:39.017375    6044 round_trippers.go:580]     Audit-Id: 31e7c8c6-5a9d-471c-a868-4b3dc01b7a5f
	I0328 01:33:39.017464    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:33:39.017464    6044 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"2077"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"8bb5dc68-e1fd-49c8-89aa-9b79f7d72fc2","resourceVersion":"356","creationTimestamp":"2024-03-28T01:07:44Z"}}]}
	I0328 01:33:39.017763    6044 default_sa.go:45] found service account: "default"
	I0328 01:33:39.017763    6044 default_sa.go:55] duration metric: took 4.8529ms for default service account to be created ...
	I0328 01:33:39.017763    6044 system_pods.go:116] waiting for k8s-apps to be running ...
	I0328 01:33:39.017901    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/namespaces/kube-system/pods
	I0328 01:33:39.017901    6044 round_trippers.go:469] Request Headers:
	I0328 01:33:39.017968    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:33:39.017968    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:33:39.023920    6044 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 01:33:39.023920    6044 round_trippers.go:577] Response Headers:
	I0328 01:33:39.023920    6044 round_trippers.go:580]     Audit-Id: b473a2ab-262a-4079-9b41-2e393370c4d3
	I0328 01:33:39.023920    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:33:39.024731    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:33:39.024731    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:33:39.024731    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:33:39.024731    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:33:39 GMT
	I0328 01:33:39.026919    6044 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"2077"},"items":[{"metadata":{"name":"coredns-76f75df574-776ph","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"dc1416cc-736d-4eab-b95d-e963572b78e3","resourceVersion":"2063","creationTimestamp":"2024-03-28T01:07:44Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"840f6c47-adf3-4c08-8b73-04e55f98f236","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:07:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"840f6c47-adf3-4c08-8b73-04e55f98f236\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 86569 chars]
	I0328 01:33:39.032531    6044 system_pods.go:86] 12 kube-system pods found
	I0328 01:33:39.032531    6044 system_pods.go:89] "coredns-76f75df574-776ph" [dc1416cc-736d-4eab-b95d-e963572b78e3] Running
	I0328 01:33:39.032531    6044 system_pods.go:89] "etcd-multinode-240000" [0a33e012-ebfe-4ac4-bf0b-ffccdd7308de] Running
	I0328 01:33:39.032531    6044 system_pods.go:89] "kindnet-hsnfl" [e049fea9-9620-4eb5-9eb0-056c68076331] Running
	I0328 01:33:39.032531    6044 system_pods.go:89] "kindnet-jvgx2" [507e3461-4bd4-46b9-9189-606b3506a742] Running
	I0328 01:33:39.032531    6044 system_pods.go:89] "kindnet-rwghf" [7c75e225-0e90-4916-bf27-a00a036e0955] Running
	I0328 01:33:39.032531    6044 system_pods.go:89] "kube-apiserver-multinode-240000" [8b9b4cf7-40b0-4a3e-96ca-28c934f9789a] Running
	I0328 01:33:39.032531    6044 system_pods.go:89] "kube-controller-manager-multinode-240000" [4a79ab06-2314-43bb-8e37-45b9aab24e4e] Running
	I0328 01:33:39.032531    6044 system_pods.go:89] "kube-proxy-47rqg" [22fd5683-834d-47ae-a5b4-1ed980514e1b] Running
	I0328 01:33:39.032531    6044 system_pods.go:89] "kube-proxy-55rch" [a96f841b-3e8f-42c1-be63-03914c0b90e8] Running
	I0328 01:33:39.032531    6044 system_pods.go:89] "kube-proxy-t88gz" [695603ac-ab24-4206-9728-342b2af018e4] Running
	I0328 01:33:39.032531    6044 system_pods.go:89] "kube-scheduler-multinode-240000" [7670489f-fb6c-4b5f-80e8-5fe8de8d7d19] Running
	I0328 01:33:39.032531    6044 system_pods.go:89] "storage-provisioner" [3f881c2f-5b9a-4dc5-bc8c-56784b9ff60f] Running
	I0328 01:33:39.032531    6044 system_pods.go:126] duration metric: took 14.7682ms to wait for k8s-apps to be running ...
	I0328 01:33:39.032531    6044 system_svc.go:44] waiting for kubelet service to be running ....
	I0328 01:33:39.046305    6044 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0328 01:33:39.075188    6044 system_svc.go:56] duration metric: took 42.6564ms WaitForService to wait for kubelet
	I0328 01:33:39.075340    6044 kubeadm.go:576] duration metric: took 1m14.0713139s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0328 01:33:39.075340    6044 node_conditions.go:102] verifying NodePressure condition ...
	I0328 01:33:39.075505    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes
	I0328 01:33:39.075505    6044 round_trippers.go:469] Request Headers:
	I0328 01:33:39.075577    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:33:39.075577    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:33:39.080842    6044 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 01:33:39.080842    6044 round_trippers.go:577] Response Headers:
	I0328 01:33:39.081011    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:33:39.081011    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:33:39.081011    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:33:39.081011    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:33:39.081011    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:33:39 GMT
	I0328 01:33:39.081011    6044 round_trippers.go:580]     Audit-Id: 1ab1d709-f2ac-46f2-9f18-885f95182cd9
	I0328 01:33:39.081454    6044 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"2077"},"items":[{"metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"2022","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"ma
nagedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v [truncated 16280 chars]
	I0328 01:33:39.082071    6044 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0328 01:33:39.082651    6044 node_conditions.go:123] node cpu capacity is 2
	I0328 01:33:39.082706    6044 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0328 01:33:39.082706    6044 node_conditions.go:123] node cpu capacity is 2
	I0328 01:33:39.082706    6044 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0328 01:33:39.082706    6044 node_conditions.go:123] node cpu capacity is 2
	I0328 01:33:39.082706    6044 node_conditions.go:105] duration metric: took 7.3005ms to run NodePressure ...
	I0328 01:33:39.082706    6044 start.go:240] waiting for startup goroutines ...
	I0328 01:33:39.082706    6044 start.go:245] waiting for cluster config update ...
	I0328 01:33:39.082706    6044 start.go:254] writing updated cluster config ...
	I0328 01:33:39.088662    6044 out.go:177] 
	I0328 01:33:39.091921    6044 config.go:182] Loaded profile config "ha-170000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0328 01:33:39.100986    6044 config.go:182] Loaded profile config "multinode-240000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0328 01:33:39.101311    6044 profile.go:142] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-240000\config.json ...
	I0328 01:33:39.106290    6044 out.go:177] * Starting "multinode-240000-m02" worker node in "multinode-240000" cluster
	I0328 01:33:39.109822    6044 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0328 01:33:39.109822    6044 cache.go:56] Caching tarball of preloaded images
	I0328 01:33:39.109822    6044 preload.go:173] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0328 01:33:39.110488    6044 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0328 01:33:39.110488    6044 profile.go:142] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-240000\config.json ...
	I0328 01:33:39.112601    6044 start.go:360] acquireMachinesLock for multinode-240000-m02: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0328 01:33:39.112601    6044 start.go:364] duration metric: took 0s to acquireMachinesLock for "multinode-240000-m02"
	I0328 01:33:39.113502    6044 start.go:96] Skipping create...Using existing machine configuration
	I0328 01:33:39.113502    6044 fix.go:54] fixHost starting: m02
	I0328 01:33:39.113502    6044 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-240000-m02 ).state
	I0328 01:33:41.501367    6044 main.go:141] libmachine: [stdout =====>] : Off
	
	I0328 01:33:41.501764    6044 main.go:141] libmachine: [stderr =====>] : 
	I0328 01:33:41.501764    6044 fix.go:112] recreateIfNeeded on multinode-240000-m02: state=Stopped err=<nil>
	W0328 01:33:41.501764    6044 fix.go:138] unexpected machine state, will restart: <nil>
	I0328 01:33:41.505415    6044 out.go:177] * Restarting existing hyperv VM for "multinode-240000-m02" ...
	I0328 01:33:41.510146    6044 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-240000-m02
	I0328 01:33:44.760660    6044 main.go:141] libmachine: [stdout =====>] : 
	I0328 01:33:44.760660    6044 main.go:141] libmachine: [stderr =====>] : 
	I0328 01:33:44.760660    6044 main.go:141] libmachine: Waiting for host to start...
	I0328 01:33:44.760660    6044 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-240000-m02 ).state
	I0328 01:33:47.160144    6044 main.go:141] libmachine: [stdout =====>] : Running
	
	I0328 01:33:47.160144    6044 main.go:141] libmachine: [stderr =====>] : 
	I0328 01:33:47.160144    6044 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-240000-m02 ).networkadapters[0]).ipaddresses[0]
	I0328 01:33:49.931320    6044 main.go:141] libmachine: [stdout =====>] : 
	I0328 01:33:49.931416    6044 main.go:141] libmachine: [stderr =====>] : 
	I0328 01:33:50.945836    6044 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-240000-m02 ).state
	I0328 01:33:53.344974    6044 main.go:141] libmachine: [stdout =====>] : Running
	
	I0328 01:33:53.345732    6044 main.go:141] libmachine: [stderr =====>] : 
	I0328 01:33:53.345732    6044 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-240000-m02 ).networkadapters[0]).ipaddresses[0]
	I0328 01:33:56.068896    6044 main.go:141] libmachine: [stdout =====>] : 
	I0328 01:33:56.069154    6044 main.go:141] libmachine: [stderr =====>] : 
	I0328 01:33:57.075928    6044 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-240000-m02 ).state
	I0328 01:33:59.398589    6044 main.go:141] libmachine: [stdout =====>] : Running
	
	I0328 01:33:59.398737    6044 main.go:141] libmachine: [stderr =====>] : 
	I0328 01:33:59.398737    6044 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-240000-m02 ).networkadapters[0]).ipaddresses[0]
	I0328 01:34:02.061556    6044 main.go:141] libmachine: [stdout =====>] : 
	I0328 01:34:02.061748    6044 main.go:141] libmachine: [stderr =====>] : 
	I0328 01:34:03.067243    6044 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-240000-m02 ).state
	I0328 01:34:05.378987    6044 main.go:141] libmachine: [stdout =====>] : Running
	
	I0328 01:34:05.378987    6044 main.go:141] libmachine: [stderr =====>] : 
	I0328 01:34:05.378987    6044 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-240000-m02 ).networkadapters[0]).ipaddresses[0]
	I0328 01:34:08.059910    6044 main.go:141] libmachine: [stdout =====>] : 
	I0328 01:34:08.060961    6044 main.go:141] libmachine: [stderr =====>] : 
	I0328 01:34:09.076183    6044 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-240000-m02 ).state
	I0328 01:34:11.429485    6044 main.go:141] libmachine: [stdout =====>] : Running
	
	I0328 01:34:11.429635    6044 main.go:141] libmachine: [stderr =====>] : 
	I0328 01:34:11.429745    6044 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-240000-m02 ).networkadapters[0]).ipaddresses[0]

                                                
                                                
** /stderr **
multinode_test.go:328: failed to run minikube start. args "out/minikube-windows-amd64.exe node list -p multinode-240000" : exit status 1
multinode_test.go:331: (dbg) Run:  out/minikube-windows-amd64.exe node list -p multinode-240000
multinode_test.go:331: (dbg) Non-zero exit: out/minikube-windows-amd64.exe node list -p multinode-240000: context deadline exceeded (0s)
multinode_test.go:333: failed to run node list. args "out/minikube-windows-amd64.exe node list -p multinode-240000" : context deadline exceeded
multinode_test.go:338: reported node list is not the same after restart. Before restart: multinode-240000	172.28.227.122
multinode-240000-m02	172.28.230.250
multinode-240000-m03	172.28.224.172

                                                
                                                
After restart: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-240000 -n multinode-240000
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-240000 -n multinode-240000: (13.0542443s)
helpers_test.go:244: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-240000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-240000 logs -n 25: (11.7441136s)
helpers_test.go:252: TestMultiNode/serial/RestartKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------------------------------------------------------------------------|------------------|-------------------|----------------|---------------------|---------------------|
	| Command |                                                           Args                                                           |     Profile      |       User        |    Version     |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------------------------------------------------------------------------|------------------|-------------------|----------------|---------------------|---------------------|
	| cp      | multinode-240000 cp testdata\cp-test.txt                                                                                 | multinode-240000 | minikube6\jenkins | v1.33.0-beta.0 | 28 Mar 24 01:19 UTC | 28 Mar 24 01:19 UTC |
	|         | multinode-240000-m02:/home/docker/cp-test.txt                                                                            |                  |                   |                |                     |                     |
	| ssh     | multinode-240000 ssh -n                                                                                                  | multinode-240000 | minikube6\jenkins | v1.33.0-beta.0 | 28 Mar 24 01:19 UTC | 28 Mar 24 01:20 UTC |
	|         | multinode-240000-m02 sudo cat                                                                                            |                  |                   |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |                |                     |                     |
	| cp      | multinode-240000 cp multinode-240000-m02:/home/docker/cp-test.txt                                                        | multinode-240000 | minikube6\jenkins | v1.33.0-beta.0 | 28 Mar 24 01:20 UTC | 28 Mar 24 01:20 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiNodeserialCopyFile2131314308\001\cp-test_multinode-240000-m02.txt |                  |                   |                |                     |                     |
	| ssh     | multinode-240000 ssh -n                                                                                                  | multinode-240000 | minikube6\jenkins | v1.33.0-beta.0 | 28 Mar 24 01:20 UTC | 28 Mar 24 01:20 UTC |
	|         | multinode-240000-m02 sudo cat                                                                                            |                  |                   |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |                |                     |                     |
	| cp      | multinode-240000 cp multinode-240000-m02:/home/docker/cp-test.txt                                                        | multinode-240000 | minikube6\jenkins | v1.33.0-beta.0 | 28 Mar 24 01:20 UTC | 28 Mar 24 01:20 UTC |
	|         | multinode-240000:/home/docker/cp-test_multinode-240000-m02_multinode-240000.txt                                          |                  |                   |                |                     |                     |
	| ssh     | multinode-240000 ssh -n                                                                                                  | multinode-240000 | minikube6\jenkins | v1.33.0-beta.0 | 28 Mar 24 01:20 UTC | 28 Mar 24 01:20 UTC |
	|         | multinode-240000-m02 sudo cat                                                                                            |                  |                   |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |                |                     |                     |
	| ssh     | multinode-240000 ssh -n multinode-240000 sudo cat                                                                        | multinode-240000 | minikube6\jenkins | v1.33.0-beta.0 | 28 Mar 24 01:20 UTC | 28 Mar 24 01:21 UTC |
	|         | /home/docker/cp-test_multinode-240000-m02_multinode-240000.txt                                                           |                  |                   |                |                     |                     |
	| cp      | multinode-240000 cp multinode-240000-m02:/home/docker/cp-test.txt                                                        | multinode-240000 | minikube6\jenkins | v1.33.0-beta.0 | 28 Mar 24 01:21 UTC | 28 Mar 24 01:21 UTC |
	|         | multinode-240000-m03:/home/docker/cp-test_multinode-240000-m02_multinode-240000-m03.txt                                  |                  |                   |                |                     |                     |
	| ssh     | multinode-240000 ssh -n                                                                                                  | multinode-240000 | minikube6\jenkins | v1.33.0-beta.0 | 28 Mar 24 01:21 UTC | 28 Mar 24 01:21 UTC |
	|         | multinode-240000-m02 sudo cat                                                                                            |                  |                   |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |                |                     |                     |
	| ssh     | multinode-240000 ssh -n multinode-240000-m03 sudo cat                                                                    | multinode-240000 | minikube6\jenkins | v1.33.0-beta.0 | 28 Mar 24 01:21 UTC | 28 Mar 24 01:21 UTC |
	|         | /home/docker/cp-test_multinode-240000-m02_multinode-240000-m03.txt                                                       |                  |                   |                |                     |                     |
	| cp      | multinode-240000 cp testdata\cp-test.txt                                                                                 | multinode-240000 | minikube6\jenkins | v1.33.0-beta.0 | 28 Mar 24 01:21 UTC | 28 Mar 24 01:21 UTC |
	|         | multinode-240000-m03:/home/docker/cp-test.txt                                                                            |                  |                   |                |                     |                     |
	| ssh     | multinode-240000 ssh -n                                                                                                  | multinode-240000 | minikube6\jenkins | v1.33.0-beta.0 | 28 Mar 24 01:21 UTC | 28 Mar 24 01:22 UTC |
	|         | multinode-240000-m03 sudo cat                                                                                            |                  |                   |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |                |                     |                     |
	| cp      | multinode-240000 cp multinode-240000-m03:/home/docker/cp-test.txt                                                        | multinode-240000 | minikube6\jenkins | v1.33.0-beta.0 | 28 Mar 24 01:22 UTC | 28 Mar 24 01:22 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiNodeserialCopyFile2131314308\001\cp-test_multinode-240000-m03.txt |                  |                   |                |                     |                     |
	| ssh     | multinode-240000 ssh -n                                                                                                  | multinode-240000 | minikube6\jenkins | v1.33.0-beta.0 | 28 Mar 24 01:22 UTC | 28 Mar 24 01:22 UTC |
	|         | multinode-240000-m03 sudo cat                                                                                            |                  |                   |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |                |                     |                     |
	| cp      | multinode-240000 cp multinode-240000-m03:/home/docker/cp-test.txt                                                        | multinode-240000 | minikube6\jenkins | v1.33.0-beta.0 | 28 Mar 24 01:22 UTC | 28 Mar 24 01:22 UTC |
	|         | multinode-240000:/home/docker/cp-test_multinode-240000-m03_multinode-240000.txt                                          |                  |                   |                |                     |                     |
	| ssh     | multinode-240000 ssh -n                                                                                                  | multinode-240000 | minikube6\jenkins | v1.33.0-beta.0 | 28 Mar 24 01:22 UTC | 28 Mar 24 01:22 UTC |
	|         | multinode-240000-m03 sudo cat                                                                                            |                  |                   |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |                |                     |                     |
	| ssh     | multinode-240000 ssh -n multinode-240000 sudo cat                                                                        | multinode-240000 | minikube6\jenkins | v1.33.0-beta.0 | 28 Mar 24 01:22 UTC | 28 Mar 24 01:23 UTC |
	|         | /home/docker/cp-test_multinode-240000-m03_multinode-240000.txt                                                           |                  |                   |                |                     |                     |
	| cp      | multinode-240000 cp multinode-240000-m03:/home/docker/cp-test.txt                                                        | multinode-240000 | minikube6\jenkins | v1.33.0-beta.0 | 28 Mar 24 01:23 UTC | 28 Mar 24 01:23 UTC |
	|         | multinode-240000-m02:/home/docker/cp-test_multinode-240000-m03_multinode-240000-m02.txt                                  |                  |                   |                |                     |                     |
	| ssh     | multinode-240000 ssh -n                                                                                                  | multinode-240000 | minikube6\jenkins | v1.33.0-beta.0 | 28 Mar 24 01:23 UTC | 28 Mar 24 01:23 UTC |
	|         | multinode-240000-m03 sudo cat                                                                                            |                  |                   |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |                |                     |                     |
	| ssh     | multinode-240000 ssh -n multinode-240000-m02 sudo cat                                                                    | multinode-240000 | minikube6\jenkins | v1.33.0-beta.0 | 28 Mar 24 01:23 UTC | 28 Mar 24 01:23 UTC |
	|         | /home/docker/cp-test_multinode-240000-m03_multinode-240000-m02.txt                                                       |                  |                   |                |                     |                     |
	| node    | multinode-240000 node stop m03                                                                                           | multinode-240000 | minikube6\jenkins | v1.33.0-beta.0 | 28 Mar 24 01:23 UTC | 28 Mar 24 01:24 UTC |
	| node    | multinode-240000 node start                                                                                              | multinode-240000 | minikube6\jenkins | v1.33.0-beta.0 | 28 Mar 24 01:25 UTC | 28 Mar 24 01:27 UTC |
	|         | m03 -v=7 --alsologtostderr                                                                                               |                  |                   |                |                     |                     |
	| node    | list -p multinode-240000                                                                                                 | multinode-240000 | minikube6\jenkins | v1.33.0-beta.0 | 28 Mar 24 01:28 UTC |                     |
	| stop    | -p multinode-240000                                                                                                      | multinode-240000 | minikube6\jenkins | v1.33.0-beta.0 | 28 Mar 24 01:28 UTC | 28 Mar 24 01:29 UTC |
	| start   | -p multinode-240000                                                                                                      | multinode-240000 | minikube6\jenkins | v1.33.0-beta.0 | 28 Mar 24 01:30 UTC |                     |
	|         | --wait=true -v=8                                                                                                         |                  |                   |                |                     |                     |
	|         | --alsologtostderr                                                                                                        |                  |                   |                |                     |                     |
	|---------|--------------------------------------------------------------------------------------------------------------------------|------------------|-------------------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/28 01:30:00
	Running on machine: minikube6
	Binary: Built with gc go1.22.1 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0328 01:30:00.313275    6044 out.go:291] Setting OutFile to fd 972 ...
	I0328 01:30:00.313275    6044 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 01:30:00.313275    6044 out.go:304] Setting ErrFile to fd 968...
	I0328 01:30:00.313275    6044 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 01:30:00.337998    6044 out.go:298] Setting JSON to false
	I0328 01:30:00.341994    6044 start.go:129] hostinfo: {"hostname":"minikube6","uptime":12061,"bootTime":1711577338,"procs":191,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4170 Build 19045.4170","kernelVersion":"10.0.19045.4170 Build 19045.4170","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0328 01:30:00.342153    6044 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0328 01:30:00.458190    6044 out.go:177] * [multinode-240000] minikube v1.33.0-beta.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4170 Build 19045.4170
	I0328 01:30:00.607515    6044 notify.go:220] Checking for updates...
	I0328 01:30:00.653360    6044 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0328 01:30:00.766456    6044 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0328 01:30:00.956146    6044 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0328 01:30:01.014359    6044 out.go:177]   - MINIKUBE_LOCATION=18485
	I0328 01:30:01.258189    6044 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0328 01:30:01.322877    6044 config.go:182] Loaded profile config "multinode-240000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0328 01:30:01.323187    6044 driver.go:392] Setting default libvirt URI to qemu:///system
	I0328 01:30:07.308307    6044 out.go:177] * Using the hyperv driver based on existing profile
	I0328 01:30:07.316021    6044 start.go:297] selected driver: hyperv
	I0328 01:30:07.316898    6044 start.go:901] validating driver "hyperv" against &{Name:multinode-240000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Ku
bernetesVersion:v1.29.3 ClusterName:multinode-240000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.28.227.122 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.28.230.250 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.28.224.172 Port:0 KubernetesVersion:v1.29.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress
:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docke
r BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0328 01:30:07.316984    6044 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0328 01:30:07.376110    6044 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0328 01:30:07.377440    6044 cni.go:84] Creating CNI manager for ""
	I0328 01:30:07.377440    6044 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0328 01:30:07.377673    6044 start.go:340] cluster config:
	{Name:multinode-240000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:multinode-240000 Namespace:default APIServer
HAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.28.227.122 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.28.230.250 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.28.224.172 Port:0 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisio
ner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemu
FirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0328 01:30:07.377673    6044 iso.go:125] acquiring lock: {Name:mk879943e10653d47fd8ae811a43a2f6cff06f02 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0328 01:30:07.513634    6044 out.go:177] * Starting "multinode-240000" primary control-plane node in "multinode-240000" cluster
	I0328 01:30:07.670409    6044 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0328 01:30:07.670830    6044 preload.go:147] Found local preload: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4
	I0328 01:30:07.670906    6044 cache.go:56] Caching tarball of preloaded images
	I0328 01:30:07.671334    6044 preload.go:173] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0328 01:30:07.671600    6044 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0328 01:30:07.671600    6044 profile.go:142] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-240000\config.json ...
	I0328 01:30:07.675183    6044 start.go:360] acquireMachinesLock for multinode-240000: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0328 01:30:07.675393    6044 start.go:364] duration metric: took 210.3µs to acquireMachinesLock for "multinode-240000"
	I0328 01:30:07.675608    6044 start.go:96] Skipping create...Using existing machine configuration
	I0328 01:30:07.675708    6044 fix.go:54] fixHost starting: 
	I0328 01:30:07.676667    6044 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-240000 ).state
	I0328 01:30:10.633072    6044 main.go:141] libmachine: [stdout =====>] : Off
	
	I0328 01:30:10.633538    6044 main.go:141] libmachine: [stderr =====>] : 
	I0328 01:30:10.633538    6044 fix.go:112] recreateIfNeeded on multinode-240000: state=Stopped err=<nil>
	W0328 01:30:10.633538    6044 fix.go:138] unexpected machine state, will restart: <nil>
	I0328 01:30:10.637851    6044 out.go:177] * Restarting existing hyperv VM for "multinode-240000" ...
	I0328 01:30:10.641170    6044 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-240000
	I0328 01:30:13.842787    6044 main.go:141] libmachine: [stdout =====>] : 
	I0328 01:30:13.842787    6044 main.go:141] libmachine: [stderr =====>] : 
	I0328 01:30:13.842787    6044 main.go:141] libmachine: Waiting for host to start...
	I0328 01:30:13.843043    6044 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-240000 ).state
	I0328 01:30:16.229995    6044 main.go:141] libmachine: [stdout =====>] : Running
	
	I0328 01:30:16.229995    6044 main.go:141] libmachine: [stderr =====>] : 
	I0328 01:30:16.230332    6044 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-240000 ).networkadapters[0]).ipaddresses[0]
	I0328 01:30:18.893212    6044 main.go:141] libmachine: [stdout =====>] : 
	I0328 01:30:18.893212    6044 main.go:141] libmachine: [stderr =====>] : 
	I0328 01:30:19.908866    6044 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-240000 ).state
	I0328 01:30:22.292946    6044 main.go:141] libmachine: [stdout =====>] : Running
	
	I0328 01:30:22.292946    6044 main.go:141] libmachine: [stderr =====>] : 
	I0328 01:30:22.292946    6044 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-240000 ).networkadapters[0]).ipaddresses[0]
	I0328 01:30:25.082635    6044 main.go:141] libmachine: [stdout =====>] : 
	I0328 01:30:25.083520    6044 main.go:141] libmachine: [stderr =====>] : 
	I0328 01:30:26.084474    6044 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-240000 ).state
	I0328 01:30:28.446937    6044 main.go:141] libmachine: [stdout =====>] : Running
	
	I0328 01:30:28.446937    6044 main.go:141] libmachine: [stderr =====>] : 
	I0328 01:30:28.446937    6044 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-240000 ).networkadapters[0]).ipaddresses[0]
	I0328 01:30:31.181702    6044 main.go:141] libmachine: [stdout =====>] : 
	I0328 01:30:31.181702    6044 main.go:141] libmachine: [stderr =====>] : 
	I0328 01:30:32.189615    6044 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-240000 ).state
	I0328 01:30:34.529122    6044 main.go:141] libmachine: [stdout =====>] : Running
	
	I0328 01:30:34.529525    6044 main.go:141] libmachine: [stderr =====>] : 
	I0328 01:30:34.529525    6044 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-240000 ).networkadapters[0]).ipaddresses[0]
	I0328 01:30:37.218113    6044 main.go:141] libmachine: [stdout =====>] : 
	I0328 01:30:37.218113    6044 main.go:141] libmachine: [stderr =====>] : 
	I0328 01:30:38.223978    6044 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-240000 ).state
	I0328 01:30:40.572558    6044 main.go:141] libmachine: [stdout =====>] : Running
	
	I0328 01:30:40.572558    6044 main.go:141] libmachine: [stderr =====>] : 
	I0328 01:30:40.573122    6044 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-240000 ).networkadapters[0]).ipaddresses[0]
	I0328 01:30:43.307092    6044 main.go:141] libmachine: [stdout =====>] : 172.28.229.19
	
	I0328 01:30:43.307092    6044 main.go:141] libmachine: [stderr =====>] : 
	I0328 01:30:43.309887    6044 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-240000 ).state
	I0328 01:30:45.582154    6044 main.go:141] libmachine: [stdout =====>] : Running
	
	I0328 01:30:45.582154    6044 main.go:141] libmachine: [stderr =====>] : 
	I0328 01:30:45.582154    6044 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-240000 ).networkadapters[0]).ipaddresses[0]
	I0328 01:30:48.299861    6044 main.go:141] libmachine: [stdout =====>] : 172.28.229.19
	
	I0328 01:30:48.299861    6044 main.go:141] libmachine: [stderr =====>] : 
	I0328 01:30:48.300290    6044 profile.go:142] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-240000\config.json ...
	I0328 01:30:48.303469    6044 machine.go:94] provisionDockerMachine start ...
	I0328 01:30:48.303469    6044 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-240000 ).state
	I0328 01:30:50.561613    6044 main.go:141] libmachine: [stdout =====>] : Running
	
	I0328 01:30:50.561613    6044 main.go:141] libmachine: [stderr =====>] : 
	I0328 01:30:50.562693    6044 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-240000 ).networkadapters[0]).ipaddresses[0]
	I0328 01:30:53.317669    6044 main.go:141] libmachine: [stdout =====>] : 172.28.229.19
	
	I0328 01:30:53.317819    6044 main.go:141] libmachine: [stderr =====>] : 
	I0328 01:30:53.324574    6044 main.go:141] libmachine: Using SSH client type: native
	I0328 01:30:53.325237    6044 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x12a9f80] 0x12acb60 <nil>  [] 0s} 172.28.229.19 22 <nil> <nil>}
	I0328 01:30:53.325237    6044 main.go:141] libmachine: About to run SSH command:
	hostname
	I0328 01:30:53.466835    6044 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0328 01:30:53.467015    6044 buildroot.go:166] provisioning hostname "multinode-240000"
	I0328 01:30:53.467099    6044 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-240000 ).state
	I0328 01:30:55.689924    6044 main.go:141] libmachine: [stdout =====>] : Running
	
	I0328 01:30:55.689924    6044 main.go:141] libmachine: [stderr =====>] : 
	I0328 01:30:55.690673    6044 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-240000 ).networkadapters[0]).ipaddresses[0]
	I0328 01:30:58.389933    6044 main.go:141] libmachine: [stdout =====>] : 172.28.229.19
	
	I0328 01:30:58.389933    6044 main.go:141] libmachine: [stderr =====>] : 
	I0328 01:30:58.395412    6044 main.go:141] libmachine: Using SSH client type: native
	I0328 01:30:58.396746    6044 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x12a9f80] 0x12acb60 <nil>  [] 0s} 172.28.229.19 22 <nil> <nil>}
	I0328 01:30:58.396888    6044 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-240000 && echo "multinode-240000" | sudo tee /etc/hostname
	I0328 01:30:58.564031    6044 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-240000
	
	I0328 01:30:58.564031    6044 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-240000 ).state
	I0328 01:31:00.811138    6044 main.go:141] libmachine: [stdout =====>] : Running
	
	I0328 01:31:00.811368    6044 main.go:141] libmachine: [stderr =====>] : 
	I0328 01:31:00.811452    6044 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-240000 ).networkadapters[0]).ipaddresses[0]
	I0328 01:31:03.509452    6044 main.go:141] libmachine: [stdout =====>] : 172.28.229.19
	
	I0328 01:31:03.509531    6044 main.go:141] libmachine: [stderr =====>] : 
	I0328 01:31:03.515796    6044 main.go:141] libmachine: Using SSH client type: native
	I0328 01:31:03.516104    6044 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x12a9f80] 0x12acb60 <nil>  [] 0s} 172.28.229.19 22 <nil> <nil>}
	I0328 01:31:03.516104    6044 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-240000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-240000/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-240000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0328 01:31:03.670779    6044 main.go:141] libmachine: SSH cmd err, output: <nil>: 
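The SSH command above updates `/etc/hosts` idempotently: rewrite an existing `127.0.1.1` entry if present, otherwise append one. A minimal sketch of the same logic, run against a scratch file so no root is needed (`HOSTS` and `NAME` are illustrative stand-ins, not the paths minikube uses):

```shell
# Sketch of the /etc/hosts hostname fix-up from the log, on a scratch copy.
HOSTS=$(mktemp)
NAME=multinode-240000
printf '127.0.0.1 localhost\n127.0.1.1 oldname\n' > "$HOSTS"

if ! grep -q " $NAME\$" "$HOSTS"; then
  if grep -q '^127\.0\.1\.1 ' "$HOSTS"; then
    # Rewrite the existing 127.0.1.1 entry in place.
    sed -i "s/^127\.0\.1\.1 .*/127.0.1.1 $NAME/" "$HOSTS"
  else
    # No 127.0.1.1 line yet: append one.
    echo "127.0.1.1 $NAME" >> "$HOSTS"
  fi
fi
grep '^127\.0\.1\.1' "$HOSTS"
```

Because both branches are guarded by the outer `grep`, re-running the command against an already-correct file is a no-op, which is why the log shows empty output on success.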
	I0328 01:31:03.670779    6044 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0328 01:31:03.670779    6044 buildroot.go:174] setting up certificates
	I0328 01:31:03.670779    6044 provision.go:84] configureAuth start
	I0328 01:31:03.670779    6044 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-240000 ).state
	I0328 01:31:05.907361    6044 main.go:141] libmachine: [stdout =====>] : Running
	
	I0328 01:31:05.907361    6044 main.go:141] libmachine: [stderr =====>] : 
	I0328 01:31:05.908344    6044 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-240000 ).networkadapters[0]).ipaddresses[0]
	I0328 01:31:08.669793    6044 main.go:141] libmachine: [stdout =====>] : 172.28.229.19
	
	I0328 01:31:08.669793    6044 main.go:141] libmachine: [stderr =====>] : 
	I0328 01:31:08.670703    6044 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-240000 ).state
	I0328 01:31:10.883725    6044 main.go:141] libmachine: [stdout =====>] : Running
	
	I0328 01:31:10.884309    6044 main.go:141] libmachine: [stderr =====>] : 
	I0328 01:31:10.884497    6044 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-240000 ).networkadapters[0]).ipaddresses[0]
	I0328 01:31:13.604385    6044 main.go:141] libmachine: [stdout =====>] : 172.28.229.19
	
	I0328 01:31:13.605031    6044 main.go:141] libmachine: [stderr =====>] : 
	I0328 01:31:13.605211    6044 provision.go:143] copyHostCerts
	I0328 01:31:13.605288    6044 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0328 01:31:13.605288    6044 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0328 01:31:13.605288    6044 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0328 01:31:13.606136    6044 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0328 01:31:13.606902    6044 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0328 01:31:13.607696    6044 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0328 01:31:13.607696    6044 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0328 01:31:13.607696    6044 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1675 bytes)
	I0328 01:31:13.609005    6044 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0328 01:31:13.609241    6044 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0328 01:31:13.609241    6044 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0328 01:31:13.609590    6044 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0328 01:31:13.610710    6044 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-240000 san=[127.0.0.1 172.28.229.19 localhost minikube multinode-240000]
	I0328 01:31:13.916678    6044 provision.go:177] copyRemoteCerts
	I0328 01:31:13.931112    6044 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0328 01:31:13.931295    6044 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-240000 ).state
	I0328 01:31:16.173641    6044 main.go:141] libmachine: [stdout =====>] : Running
	
	I0328 01:31:16.173641    6044 main.go:141] libmachine: [stderr =====>] : 
	I0328 01:31:16.173935    6044 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-240000 ).networkadapters[0]).ipaddresses[0]
	I0328 01:31:18.890759    6044 main.go:141] libmachine: [stdout =====>] : 172.28.229.19
	
	I0328 01:31:18.891588    6044 main.go:141] libmachine: [stderr =====>] : 
	I0328 01:31:18.891995    6044 sshutil.go:53] new ssh client: &{IP:172.28.229.19 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-240000\id_rsa Username:docker}
	I0328 01:31:18.998828    6044 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (5.0676811s)
	I0328 01:31:18.998828    6044 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0328 01:31:18.998828    6044 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1216 bytes)
	I0328 01:31:19.049980    6044 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0328 01:31:19.049980    6044 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0328 01:31:19.100749    6044 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0328 01:31:19.101170    6044 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0328 01:31:19.152754    6044 provision.go:87] duration metric: took 15.4818698s to configureAuth
	I0328 01:31:19.152957    6044 buildroot.go:189] setting minikube options for container-runtime
	I0328 01:31:19.153486    6044 config.go:182] Loaded profile config "multinode-240000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0328 01:31:19.153657    6044 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-240000 ).state
	I0328 01:31:21.481248    6044 main.go:141] libmachine: [stdout =====>] : Running
	
	I0328 01:31:21.481248    6044 main.go:141] libmachine: [stderr =====>] : 
	I0328 01:31:21.481399    6044 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-240000 ).networkadapters[0]).ipaddresses[0]
	I0328 01:31:24.249457    6044 main.go:141] libmachine: [stdout =====>] : 172.28.229.19
	
	I0328 01:31:24.249457    6044 main.go:141] libmachine: [stderr =====>] : 
	I0328 01:31:24.256249    6044 main.go:141] libmachine: Using SSH client type: native
	I0328 01:31:24.257214    6044 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x12a9f80] 0x12acb60 <nil>  [] 0s} 172.28.229.19 22 <nil> <nil>}
	I0328 01:31:24.257214    6044 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0328 01:31:24.387228    6044 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0328 01:31:24.387228    6044 buildroot.go:70] root file system type: tmpfs
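The runner learns the guest's root filesystem type with a single `df`/`tail` pipeline (here it reports `tmpfs`, as expected for a Buildroot live image). The same probe can be run standalone on any Linux host with GNU coreutils:

```shell
# Query the filesystem type of / exactly as the log does.
# Typical outputs: ext4, btrfs, overlay, tmpfs, ...
fstype=$(df --output=fstype / | tail -n 1)
echo "root fs: $fstype"
```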
	I0328 01:31:24.387518    6044 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0328 01:31:24.387602    6044 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-240000 ).state
	I0328 01:31:26.668994    6044 main.go:141] libmachine: [stdout =====>] : Running
	
	I0328 01:31:26.669143    6044 main.go:141] libmachine: [stderr =====>] : 
	I0328 01:31:26.669143    6044 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-240000 ).networkadapters[0]).ipaddresses[0]
	I0328 01:31:29.382845    6044 main.go:141] libmachine: [stdout =====>] : 172.28.229.19
	
	I0328 01:31:29.382845    6044 main.go:141] libmachine: [stderr =====>] : 
	I0328 01:31:29.390386    6044 main.go:141] libmachine: Using SSH client type: native
	I0328 01:31:29.390557    6044 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x12a9f80] 0x12acb60 <nil>  [] 0s} 172.28.229.19 22 <nil> <nil>}
	I0328 01:31:29.390557    6044 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0328 01:31:29.549421    6044 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0328 01:31:29.550025    6044 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-240000 ).state
	I0328 01:31:31.809789    6044 main.go:141] libmachine: [stdout =====>] : Running
	
	I0328 01:31:31.810462    6044 main.go:141] libmachine: [stderr =====>] : 
	I0328 01:31:31.810462    6044 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-240000 ).networkadapters[0]).ipaddresses[0]
	I0328 01:31:34.516698    6044 main.go:141] libmachine: [stdout =====>] : 172.28.229.19
	
	I0328 01:31:34.517804    6044 main.go:141] libmachine: [stderr =====>] : 
	I0328 01:31:34.523304    6044 main.go:141] libmachine: Using SSH client type: native
	I0328 01:31:34.524045    6044 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x12a9f80] 0x12acb60 <nil>  [] 0s} 172.28.229.19 22 <nil> <nil>}
	I0328 01:31:34.524045    6044 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0328 01:31:37.114381    6044 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0328 01:31:37.114381    6044 machine.go:97] duration metric: took 48.8105807s to provisionDockerMachine
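The unit installation above uses a "write `.new`, diff, swap on change" pattern: the rendered file only replaces the installed one (followed by `daemon-reload`/`enable`/`restart`) when the two differ, and a missing installed file also triggers the swap because `diff` exits non-zero. A sketch of that control flow on scratch files, with no root or systemd required (paths and flags are illustrative):

```shell
# Sketch of the diff-and-swap pattern used for docker.service in the log.
dir=$(mktemp -d)
printf 'ExecStart=/usr/bin/dockerd --old-flag\n' > "$dir/docker.service"
printf 'ExecStart=/usr/bin/dockerd --new-flag\n' > "$dir/docker.service.new"

# Only replace the unit (and, in the real flow, reload/restart the daemon)
# when the rendered file actually differs from what is installed.
if ! diff -u "$dir/docker.service" "$dir/docker.service.new" > /dev/null; then
    mv "$dir/docker.service.new" "$dir/docker.service"
    echo "unit updated"
fi
cat "$dir/docker.service"
```

In the log the `diff` step prints `can't stat '/lib/systemd/system/docker.service'` because the unit did not exist yet; that failure is what routes execution into the `mv`/`systemctl` branch on first install.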
	I0328 01:31:37.114381    6044 start.go:293] postStartSetup for "multinode-240000" (driver="hyperv")
	I0328 01:31:37.114381    6044 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0328 01:31:37.128277    6044 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0328 01:31:37.128277    6044 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-240000 ).state
	I0328 01:31:39.380911    6044 main.go:141] libmachine: [stdout =====>] : Running
	
	I0328 01:31:39.381266    6044 main.go:141] libmachine: [stderr =====>] : 
	I0328 01:31:39.381709    6044 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-240000 ).networkadapters[0]).ipaddresses[0]
	I0328 01:31:42.076488    6044 main.go:141] libmachine: [stdout =====>] : 172.28.229.19
	
	I0328 01:31:42.076488    6044 main.go:141] libmachine: [stderr =====>] : 
	I0328 01:31:42.077192    6044 sshutil.go:53] new ssh client: &{IP:172.28.229.19 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-240000\id_rsa Username:docker}
	I0328 01:31:42.179970    6044 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (5.0516588s)
	I0328 01:31:42.194768    6044 ssh_runner.go:195] Run: cat /etc/os-release
	I0328 01:31:42.201744    6044 command_runner.go:130] > NAME=Buildroot
	I0328 01:31:42.201744    6044 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0328 01:31:42.201744    6044 command_runner.go:130] > ID=buildroot
	I0328 01:31:42.201744    6044 command_runner.go:130] > VERSION_ID=2023.02.9
	I0328 01:31:42.201744    6044 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0328 01:31:42.201848    6044 info.go:137] Remote host: Buildroot 2023.02.9
	I0328 01:31:42.201959    6044 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0328 01:31:42.202609    6044 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0328 01:31:42.204213    6044 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\104602.pem -> 104602.pem in /etc/ssl/certs
	I0328 01:31:42.204213    6044 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\104602.pem -> /etc/ssl/certs/104602.pem
	I0328 01:31:42.218315    6044 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0328 01:31:42.238227    6044 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\104602.pem --> /etc/ssl/certs/104602.pem (1708 bytes)
	I0328 01:31:42.286689    6044 start.go:296] duration metric: took 5.1722726s for postStartSetup
	I0328 01:31:42.286829    6044 fix.go:56] duration metric: took 1m34.6105776s for fixHost
	I0328 01:31:42.286921    6044 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-240000 ).state
	I0328 01:31:44.532150    6044 main.go:141] libmachine: [stdout =====>] : Running
	
	I0328 01:31:44.532150    6044 main.go:141] libmachine: [stderr =====>] : 
	I0328 01:31:44.532926    6044 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-240000 ).networkadapters[0]).ipaddresses[0]
	I0328 01:31:47.278447    6044 main.go:141] libmachine: [stdout =====>] : 172.28.229.19
	
	I0328 01:31:47.279303    6044 main.go:141] libmachine: [stderr =====>] : 
	I0328 01:31:47.284914    6044 main.go:141] libmachine: Using SSH client type: native
	I0328 01:31:47.285607    6044 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x12a9f80] 0x12acb60 <nil>  [] 0s} 172.28.229.19 22 <nil> <nil>}
	I0328 01:31:47.285607    6044 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0328 01:31:47.426555    6044 main.go:141] libmachine: SSH cmd err, output: <nil>: 1711589507.440502788
	
	I0328 01:31:47.426555    6044 fix.go:216] guest clock: 1711589507.440502788
	I0328 01:31:47.426555    6044 fix.go:229] Guest: 2024-03-28 01:31:47.440502788 +0000 UTC Remote: 2024-03-28 01:31:42.2868296 +0000 UTC m=+102.161341801 (delta=5.153673188s)
	I0328 01:31:47.426555    6044 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-240000 ).state
	I0328 01:31:49.682881    6044 main.go:141] libmachine: [stdout =====>] : Running
	
	I0328 01:31:49.682881    6044 main.go:141] libmachine: [stderr =====>] : 
	I0328 01:31:49.683884    6044 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-240000 ).networkadapters[0]).ipaddresses[0]
	I0328 01:31:52.425647    6044 main.go:141] libmachine: [stdout =====>] : 172.28.229.19
	
	I0328 01:31:52.425719    6044 main.go:141] libmachine: [stderr =====>] : 
	I0328 01:31:52.431477    6044 main.go:141] libmachine: Using SSH client type: native
	I0328 01:31:52.432491    6044 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x12a9f80] 0x12acb60 <nil>  [] 0s} 172.28.229.19 22 <nil> <nil>}
	I0328 01:31:52.432491    6044 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1711589507
	I0328 01:31:52.585055    6044 main.go:141] libmachine: SSH cmd err, output: <nil>: Thu Mar 28 01:31:47 UTC 2024
	
	I0328 01:31:52.585119    6044 fix.go:236] clock set: Thu Mar 28 01:31:47 UTC 2024
	 (err=<nil>)
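The clock-fix step reads the guest's epoch time (`date +%s.%N` over SSH), compares it against the host's record, and resyncs with `sudo date -s @<epoch>` when the delta is meaningful (5.15s here). A sketch of the skew check with illustrative values and an illustrative 2-second threshold:

```shell
# Sketch of the guest/host clock-skew comparison from the log.
guest_epoch=1711589507
host_epoch=1711589502        # pretend the host recorded 5s earlier
delta=$((guest_epoch - host_epoch))

# The real flow resynced with: sudo date -s @<host_epoch>
if [ "$delta" -gt 2 ] || [ "$delta" -lt -2 ]; then
    echo "clock skew ${delta}s, would resync"
fi
```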
	I0328 01:31:52.585119    6044 start.go:83] releasing machines lock for "multinode-240000", held for 1m44.9089567s
	I0328 01:31:52.585343    6044 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-240000 ).state
	I0328 01:31:54.877318    6044 main.go:141] libmachine: [stdout =====>] : Running
	
	I0328 01:31:54.877318    6044 main.go:141] libmachine: [stderr =====>] : 
	I0328 01:31:54.878144    6044 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-240000 ).networkadapters[0]).ipaddresses[0]
	I0328 01:31:57.574828    6044 main.go:141] libmachine: [stdout =====>] : 172.28.229.19
	
	I0328 01:31:57.575213    6044 main.go:141] libmachine: [stderr =====>] : 
	I0328 01:31:57.579532    6044 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0328 01:31:57.579740    6044 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-240000 ).state
	I0328 01:31:57.592077    6044 ssh_runner.go:195] Run: cat /version.json
	I0328 01:31:57.592077    6044 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-240000 ).state
	I0328 01:31:59.893152    6044 main.go:141] libmachine: [stdout =====>] : Running
	
	I0328 01:31:59.893152    6044 main.go:141] libmachine: [stderr =====>] : 
	I0328 01:31:59.893152    6044 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-240000 ).networkadapters[0]).ipaddresses[0]
	I0328 01:31:59.924231    6044 main.go:141] libmachine: [stdout =====>] : Running
	
	I0328 01:31:59.924231    6044 main.go:141] libmachine: [stderr =====>] : 
	I0328 01:31:59.924231    6044 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-240000 ).networkadapters[0]).ipaddresses[0]
	I0328 01:32:02.721963    6044 main.go:141] libmachine: [stdout =====>] : 172.28.229.19
	
	I0328 01:32:02.722061    6044 main.go:141] libmachine: [stderr =====>] : 
	I0328 01:32:02.722061    6044 sshutil.go:53] new ssh client: &{IP:172.28.229.19 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-240000\id_rsa Username:docker}
	I0328 01:32:02.752414    6044 main.go:141] libmachine: [stdout =====>] : 172.28.229.19
	
	I0328 01:32:02.752484    6044 main.go:141] libmachine: [stderr =====>] : 
	I0328 01:32:02.752832    6044 sshutil.go:53] new ssh client: &{IP:172.28.229.19 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-240000\id_rsa Username:docker}
	I0328 01:32:02.999378    6044 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0328 01:32:02.999378    6044 command_runner.go:130] > {"iso_version": "v1.33.0-1711559712-18485", "kicbase_version": "v0.0.43-beta.0", "minikube_version": "v1.33.0-beta.0", "commit": "db97f5257476488cfa11a4cd2d95d2aa6fbd9d33"}
	I0328 01:32:02.999378    6044 ssh_runner.go:235] Completed: cat /version.json: (5.4072639s)
	I0328 01:32:02.999378    6044 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.4196768s)
	I0328 01:32:03.014095    6044 ssh_runner.go:195] Run: systemctl --version
	I0328 01:32:03.024552    6044 command_runner.go:130] > systemd 252 (252)
	I0328 01:32:03.024629    6044 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0328 01:32:03.038984    6044 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0328 01:32:03.048495    6044 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0328 01:32:03.048812    6044 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0328 01:32:03.061124    6044 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0328 01:32:03.095375    6044 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0328 01:32:03.095375    6044 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0328 01:32:03.095375    6044 start.go:494] detecting cgroup driver to use...
	I0328 01:32:03.095375    6044 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0328 01:32:03.135848    6044 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0328 01:32:03.149781    6044 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0328 01:32:03.186891    6044 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0328 01:32:03.209913    6044 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0328 01:32:03.222677    6044 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0328 01:32:03.256516    6044 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0328 01:32:03.290819    6044 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0328 01:32:03.324261    6044 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0328 01:32:03.358770    6044 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0328 01:32:03.396649    6044 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0328 01:32:03.429320    6044 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0328 01:32:03.464518    6044 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0328 01:32:03.500988    6044 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0328 01:32:03.521856    6044 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0328 01:32:03.535123    6044 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0328 01:32:03.567280    6044 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0328 01:32:03.780537    6044 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0328 01:32:03.818293    6044 start.go:494] detecting cgroup driver to use...
	I0328 01:32:03.831473    6044 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0328 01:32:03.853864    6044 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0328 01:32:03.854614    6044 command_runner.go:130] > [Unit]
	I0328 01:32:03.854614    6044 command_runner.go:130] > Description=Docker Application Container Engine
	I0328 01:32:03.854614    6044 command_runner.go:130] > Documentation=https://docs.docker.com
	I0328 01:32:03.854614    6044 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0328 01:32:03.854614    6044 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0328 01:32:03.854614    6044 command_runner.go:130] > StartLimitBurst=3
	I0328 01:32:03.854614    6044 command_runner.go:130] > StartLimitIntervalSec=60
	I0328 01:32:03.854614    6044 command_runner.go:130] > [Service]
	I0328 01:32:03.854614    6044 command_runner.go:130] > Type=notify
	I0328 01:32:03.854614    6044 command_runner.go:130] > Restart=on-failure
	I0328 01:32:03.854614    6044 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0328 01:32:03.855705    6044 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0328 01:32:03.855747    6044 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0328 01:32:03.855844    6044 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0328 01:32:03.855844    6044 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0328 01:32:03.856011    6044 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0328 01:32:03.856069    6044 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0328 01:32:03.856069    6044 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0328 01:32:03.856069    6044 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0328 01:32:03.856125    6044 command_runner.go:130] > ExecStart=
	I0328 01:32:03.856125    6044 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0328 01:32:03.856171    6044 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0328 01:32:03.856171    6044 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0328 01:32:03.856171    6044 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0328 01:32:03.856171    6044 command_runner.go:130] > LimitNOFILE=infinity
	I0328 01:32:03.856171    6044 command_runner.go:130] > LimitNPROC=infinity
	I0328 01:32:03.856171    6044 command_runner.go:130] > LimitCORE=infinity
	I0328 01:32:03.856171    6044 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0328 01:32:03.856254    6044 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0328 01:32:03.856297    6044 command_runner.go:130] > TasksMax=infinity
	I0328 01:32:03.856297    6044 command_runner.go:130] > TimeoutStartSec=0
	I0328 01:32:03.856297    6044 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0328 01:32:03.856297    6044 command_runner.go:130] > Delegate=yes
	I0328 01:32:03.856297    6044 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0328 01:32:03.856297    6044 command_runner.go:130] > KillMode=process
	I0328 01:32:03.856297    6044 command_runner.go:130] > [Install]
	I0328 01:32:03.856359    6044 command_runner.go:130] > WantedBy=multi-user.target
	I0328 01:32:03.869208    6044 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0328 01:32:03.911638    6044 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0328 01:32:03.958364    6044 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0328 01:32:03.998450    6044 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0328 01:32:04.037925    6044 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0328 01:32:04.102633    6044 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0328 01:32:04.127879    6044 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0328 01:32:04.162952    6044 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0328 01:32:04.176493    6044 ssh_runner.go:195] Run: which cri-dockerd
	I0328 01:32:04.182665    6044 command_runner.go:130] > /usr/bin/cri-dockerd
	I0328 01:32:04.195266    6044 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0328 01:32:04.214250    6044 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0328 01:32:04.259955    6044 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0328 01:32:04.477140    6044 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0328 01:32:04.675026    6044 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0328 01:32:04.675299    6044 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0328 01:32:04.724853    6044 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0328 01:32:04.935415    6044 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0328 01:32:07.626086    6044 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.6906528s)
	I0328 01:32:07.640068    6044 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0328 01:32:07.679186    6044 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0328 01:32:07.717414    6044 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0328 01:32:07.926863    6044 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0328 01:32:08.138067    6044 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0328 01:32:08.356866    6044 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0328 01:32:08.400987    6044 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0328 01:32:08.441537    6044 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0328 01:32:08.668166    6044 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0328 01:32:08.776719    6044 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0328 01:32:08.787947    6044 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0328 01:32:08.796951    6044 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0328 01:32:08.796951    6044 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0328 01:32:08.796951    6044 command_runner.go:130] > Device: 0,22	Inode: 850         Links: 1
	I0328 01:32:08.796951    6044 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0328 01:32:08.796951    6044 command_runner.go:130] > Access: 2024-03-28 01:32:08.707789032 +0000
	I0328 01:32:08.796951    6044 command_runner.go:130] > Modify: 2024-03-28 01:32:08.707789032 +0000
	I0328 01:32:08.796951    6044 command_runner.go:130] > Change: 2024-03-28 01:32:08.712789044 +0000
	I0328 01:32:08.796951    6044 command_runner.go:130] >  Birth: -
	I0328 01:32:08.797625    6044 start.go:562] Will wait 60s for crictl version
	I0328 01:32:08.809376    6044 ssh_runner.go:195] Run: which crictl
	I0328 01:32:08.814383    6044 command_runner.go:130] > /usr/bin/crictl
	I0328 01:32:08.827985    6044 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0328 01:32:08.907335    6044 command_runner.go:130] > Version:  0.1.0
	I0328 01:32:08.907335    6044 command_runner.go:130] > RuntimeName:  docker
	I0328 01:32:08.907335    6044 command_runner.go:130] > RuntimeVersion:  26.0.0
	I0328 01:32:08.907335    6044 command_runner.go:130] > RuntimeApiVersion:  v1
	I0328 01:32:08.907335    6044 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.0
	RuntimeApiVersion:  v1
	I0328 01:32:08.916322    6044 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0328 01:32:08.948368    6044 command_runner.go:130] > 26.0.0
	I0328 01:32:08.960332    6044 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0328 01:32:08.995021    6044 command_runner.go:130] > 26.0.0
	I0328 01:32:09.002324    6044 out.go:204] * Preparing Kubernetes v1.29.3 on Docker 26.0.0 ...
	I0328 01:32:09.002324    6044 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0328 01:32:09.006798    6044 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0328 01:32:09.006798    6044 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0328 01:32:09.006798    6044 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0328 01:32:09.006798    6044 ip.go:207] Found interface: {Index:6 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:26:7a:39 Flags:up|broadcast|multicast|running}
	I0328 01:32:09.009358    6044 ip.go:210] interface addr: fe80::e3e0:8483:9c84:940f/64
	I0328 01:32:09.009358    6044 ip.go:210] interface addr: 172.28.224.1/20
	I0328 01:32:09.021885    6044 ssh_runner.go:195] Run: grep 172.28.224.1	host.minikube.internal$ /etc/hosts
	I0328 01:32:09.028375    6044 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.28.224.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0328 01:32:09.052344    6044 kubeadm.go:877] updating cluster {Name:multinode-240000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.29.3 ClusterName:multinode-240000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.28.229.19 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.28.230.250 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.28.224.172 Port:0 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingre
ss-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirr
or: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0328 01:32:09.052710    6044 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0328 01:32:09.062677    6044 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0328 01:32:09.088599    6044 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.29.3
	I0328 01:32:09.088599    6044 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.29.3
	I0328 01:32:09.088801    6044 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.29.3
	I0328 01:32:09.088801    6044 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.29.3
	I0328 01:32:09.088801    6044 command_runner.go:130] > registry.k8s.io/etcd:3.5.12-0
	I0328 01:32:09.088801    6044 command_runner.go:130] > kindest/kindnetd:v20240202-8f1494ea
	I0328 01:32:09.088801    6044 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0328 01:32:09.088801    6044 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0328 01:32:09.088801    6044 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0328 01:32:09.088891    6044 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0328 01:32:09.089966    6044 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.29.3
	registry.k8s.io/kube-controller-manager:v1.29.3
	registry.k8s.io/kube-scheduler:v1.29.3
	registry.k8s.io/kube-proxy:v1.29.3
	registry.k8s.io/etcd:3.5.12-0
	kindest/kindnetd:v20240202-8f1494ea
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0328 01:32:09.089966    6044 docker.go:615] Images already preloaded, skipping extraction
	I0328 01:32:09.101153    6044 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0328 01:32:09.127910    6044 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.29.3
	I0328 01:32:09.127910    6044 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.29.3
	I0328 01:32:09.127910    6044 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.29.3
	I0328 01:32:09.128129    6044 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.29.3
	I0328 01:32:09.128129    6044 command_runner.go:130] > registry.k8s.io/etcd:3.5.12-0
	I0328 01:32:09.128129    6044 command_runner.go:130] > kindest/kindnetd:v20240202-8f1494ea
	I0328 01:32:09.128129    6044 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0328 01:32:09.128129    6044 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0328 01:32:09.128129    6044 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0328 01:32:09.128129    6044 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0328 01:32:09.128295    6044 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.29.3
	registry.k8s.io/kube-controller-manager:v1.29.3
	registry.k8s.io/kube-scheduler:v1.29.3
	registry.k8s.io/kube-proxy:v1.29.3
	registry.k8s.io/etcd:3.5.12-0
	kindest/kindnetd:v20240202-8f1494ea
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0328 01:32:09.128378    6044 cache_images.go:84] Images are preloaded, skipping loading
	I0328 01:32:09.128404    6044 kubeadm.go:928] updating node { 172.28.229.19 8443 v1.29.3 docker true true} ...
	I0328 01:32:09.128470    6044 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-240000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.28.229.19
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:multinode-240000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0328 01:32:09.138576    6044 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0328 01:32:09.177529    6044 command_runner.go:130] > cgroupfs
	I0328 01:32:09.177776    6044 cni.go:84] Creating CNI manager for ""
	I0328 01:32:09.177776    6044 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0328 01:32:09.177776    6044 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0328 01:32:09.177858    6044 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.28.229.19 APIServerPort:8443 KubernetesVersion:v1.29.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-240000 NodeName:multinode-240000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.28.229.19"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.28.229.19 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/
etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0328 01:32:09.177912    6044 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.28.229.19
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-240000"
	  kubeletExtraArgs:
	    node-ip: 172.28.229.19
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.28.229.19"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0328 01:32:09.190631    6044 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0328 01:32:09.211817    6044 command_runner.go:130] > kubeadm
	I0328 01:32:09.211817    6044 command_runner.go:130] > kubectl
	I0328 01:32:09.211817    6044 command_runner.go:130] > kubelet
	I0328 01:32:09.211895    6044 binaries.go:44] Found k8s binaries, skipping transfer
	I0328 01:32:09.224707    6044 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0328 01:32:09.244507    6044 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0328 01:32:09.276515    6044 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0328 01:32:09.310052    6044 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
	I0328 01:32:09.359381    6044 ssh_runner.go:195] Run: grep 172.28.229.19	control-plane.minikube.internal$ /etc/hosts
	I0328 01:32:09.365947    6044 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.28.229.19	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0328 01:32:09.400512    6044 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0328 01:32:09.613176    6044 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0328 01:32:09.645629    6044 certs.go:68] Setting up C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-240000 for IP: 172.28.229.19
	I0328 01:32:09.645701    6044 certs.go:194] generating shared ca certs ...
	I0328 01:32:09.645763    6044 certs.go:226] acquiring lock for ca certs: {Name:mkc71405905d3cea24da832e98113e061e759324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 01:32:09.646236    6044 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key
	I0328 01:32:09.646952    6044 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key
	I0328 01:32:09.647228    6044 certs.go:256] generating profile certs ...
	I0328 01:32:09.648024    6044 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-240000\client.key
	I0328 01:32:09.648225    6044 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-240000\apiserver.key.fbd45dfa
	I0328 01:32:09.648381    6044 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-240000\apiserver.crt.fbd45dfa with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.28.229.19]
	I0328 01:32:09.881762    6044 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-240000\apiserver.crt.fbd45dfa ...
	I0328 01:32:09.881762    6044 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-240000\apiserver.crt.fbd45dfa: {Name:mk672bbda5084fd4479fd4bd1f8ff61e22b38a39 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 01:32:09.882343    6044 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-240000\apiserver.key.fbd45dfa ...
	I0328 01:32:09.883365    6044 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-240000\apiserver.key.fbd45dfa: {Name:mk17e009729aae4c06ec0571ea6c00ff1f08753a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 01:32:09.883605    6044 certs.go:381] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-240000\apiserver.crt.fbd45dfa -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-240000\apiserver.crt
	I0328 01:32:09.895434    6044 certs.go:385] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-240000\apiserver.key.fbd45dfa -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-240000\apiserver.key
	I0328 01:32:09.896420    6044 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-240000\proxy-client.key
	I0328 01:32:09.896420    6044 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0328 01:32:09.897470    6044 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0328 01:32:09.897495    6044 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0328 01:32:09.897804    6044 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0328 01:32:09.898064    6044 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-240000\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0328 01:32:09.898287    6044 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-240000\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0328 01:32:09.898447    6044 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-240000\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0328 01:32:09.898579    6044 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-240000\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0328 01:32:09.898785    6044 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\10460.pem (1338 bytes)
	W0328 01:32:09.898785    6044 certs.go:480] ignoring C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\10460_empty.pem, impossibly tiny 0 bytes
	I0328 01:32:09.898785    6044 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0328 01:32:09.898785    6044 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0328 01:32:09.898785    6044 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0328 01:32:09.900091    6044 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0328 01:32:09.900801    6044 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\104602.pem (1708 bytes)
	I0328 01:32:09.901047    6044 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\104602.pem -> /usr/share/ca-certificates/104602.pem
	I0328 01:32:09.901316    6044 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0328 01:32:09.901530    6044 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\10460.pem -> /usr/share/ca-certificates/10460.pem
	I0328 01:32:09.903022    6044 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0328 01:32:09.955883    6044 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0328 01:32:10.011738    6044 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0328 01:32:10.067517    6044 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0328 01:32:10.128505    6044 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-240000\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0328 01:32:10.176844    6044 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-240000\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0328 01:32:10.229773    6044 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-240000\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0328 01:32:10.285499    6044 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-240000\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0328 01:32:10.342232    6044 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\104602.pem --> /usr/share/ca-certificates/104602.pem (1708 bytes)
	I0328 01:32:10.394173    6044 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0328 01:32:10.448053    6044 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\10460.pem --> /usr/share/ca-certificates/10460.pem (1338 bytes)
	I0328 01:32:10.496984    6044 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0328 01:32:10.546538    6044 ssh_runner.go:195] Run: openssl version
	I0328 01:32:10.559981    6044 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0328 01:32:10.574581    6044 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10460.pem && ln -fs /usr/share/ca-certificates/10460.pem /etc/ssl/certs/10460.pem"
	I0328 01:32:10.608039    6044 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10460.pem
	I0328 01:32:10.615597    6044 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Mar 27 23:40 /usr/share/ca-certificates/10460.pem
	I0328 01:32:10.615654    6044 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 27 23:40 /usr/share/ca-certificates/10460.pem
	I0328 01:32:10.628386    6044 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10460.pem
	I0328 01:32:10.637673    6044 command_runner.go:130] > 51391683
	I0328 01:32:10.649972    6044 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10460.pem /etc/ssl/certs/51391683.0"
	I0328 01:32:10.682992    6044 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/104602.pem && ln -fs /usr/share/ca-certificates/104602.pem /etc/ssl/certs/104602.pem"
	I0328 01:32:10.717560    6044 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/104602.pem
	I0328 01:32:10.725835    6044 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Mar 27 23:40 /usr/share/ca-certificates/104602.pem
	I0328 01:32:10.725835    6044 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 27 23:40 /usr/share/ca-certificates/104602.pem
	I0328 01:32:10.739278    6044 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/104602.pem
	I0328 01:32:10.748756    6044 command_runner.go:130] > 3ec20f2e
	I0328 01:32:10.761511    6044 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/104602.pem /etc/ssl/certs/3ec20f2e.0"
	I0328 01:32:10.794098    6044 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0328 01:32:10.829212    6044 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0328 01:32:10.837233    6044 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Mar 27 23:37 /usr/share/ca-certificates/minikubeCA.pem
	I0328 01:32:10.838335    6044 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 27 23:37 /usr/share/ca-certificates/minikubeCA.pem
	I0328 01:32:10.850220    6044 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0328 01:32:10.861221    6044 command_runner.go:130] > b5213941
	I0328 01:32:10.873258    6044 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0328 01:32:10.910968    6044 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0328 01:32:10.919865    6044 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0328 01:32:10.919865    6044 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0328 01:32:10.919950    6044 command_runner.go:130] > Device: 8,1	Inode: 4196142     Links: 1
	I0328 01:32:10.919974    6044 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0328 01:32:10.919974    6044 command_runner.go:130] > Access: 2024-03-28 01:07:17.262283006 +0000
	I0328 01:32:10.919974    6044 command_runner.go:130] > Modify: 2024-03-28 01:07:17.262283006 +0000
	I0328 01:32:10.920036    6044 command_runner.go:130] > Change: 2024-03-28 01:07:17.262283006 +0000
	I0328 01:32:10.920036    6044 command_runner.go:130] >  Birth: 2024-03-28 01:07:17.262283006 +0000
	I0328 01:32:10.936931    6044 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0328 01:32:10.949507    6044 command_runner.go:130] > Certificate will not expire
	I0328 01:32:10.965306    6044 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0328 01:32:10.978035    6044 command_runner.go:130] > Certificate will not expire
	I0328 01:32:10.993060    6044 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0328 01:32:11.004113    6044 command_runner.go:130] > Certificate will not expire
	I0328 01:32:11.017702    6044 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0328 01:32:11.028884    6044 command_runner.go:130] > Certificate will not expire
	I0328 01:32:11.043422    6044 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0328 01:32:11.054378    6044 command_runner.go:130] > Certificate will not expire
	I0328 01:32:11.067575    6044 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0328 01:32:11.083623    6044 command_runner.go:130] > Certificate will not expire
	I0328 01:32:11.084158    6044 kubeadm.go:391] StartCluster: {Name:multinode-240000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:multinode-240000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.28.229.19 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.28.230.250 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.28.224.172 Port:0 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0328 01:32:11.095332    6044 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0328 01:32:11.133216    6044 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0328 01:32:11.155454    6044 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I0328 01:32:11.155510    6044 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I0328 01:32:11.155579    6044 command_runner.go:130] > /var/lib/minikube/etcd:
	I0328 01:32:11.155579    6044 command_runner.go:130] > member
	W0328 01:32:11.155644    6044 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0328 01:32:11.155749    6044 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0328 01:32:11.155792    6044 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0328 01:32:11.169709    6044 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0328 01:32:11.189381    6044 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0328 01:32:11.190796    6044 kubeconfig.go:47] verify endpoint returned: get endpoint: "multinode-240000" does not appear in C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0328 01:32:11.190963    6044 kubeconfig.go:62] C:\Users\jenkins.minikube6\minikube-integration\kubeconfig needs updating (will repair): [kubeconfig missing "multinode-240000" cluster setting kubeconfig missing "multinode-240000" context setting]
	I0328 01:32:11.192114    6044 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\kubeconfig: {Name:mk4f4c590fd703778dedd3b8c3d630c561af8c6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 01:32:11.205920    6044 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0328 01:32:11.207123    6044 kapi.go:59] client config for multinode-240000: &rest.Config{Host:"https://172.28.229.19:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-240000/client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-240000/client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x26ab500), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0328 01:32:11.208671    6044 cert_rotation.go:137] Starting client certificate rotation controller
	I0328 01:32:11.223482    6044 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0328 01:32:11.245634    6044 command_runner.go:130] > --- /var/tmp/minikube/kubeadm.yaml
	I0328 01:32:11.245725    6044 command_runner.go:130] > +++ /var/tmp/minikube/kubeadm.yaml.new
	I0328 01:32:11.245725    6044 command_runner.go:130] > @@ -1,7 +1,7 @@
	I0328 01:32:11.245725    6044 command_runner.go:130] >  apiVersion: kubeadm.k8s.io/v1beta3
	I0328 01:32:11.245725    6044 command_runner.go:130] >  kind: InitConfiguration
	I0328 01:32:11.245725    6044 command_runner.go:130] >  localAPIEndpoint:
	I0328 01:32:11.245725    6044 command_runner.go:130] > -  advertiseAddress: 172.28.227.122
	I0328 01:32:11.245807    6044 command_runner.go:130] > +  advertiseAddress: 172.28.229.19
	I0328 01:32:11.245807    6044 command_runner.go:130] >    bindPort: 8443
	I0328 01:32:11.245851    6044 command_runner.go:130] >  bootstrapTokens:
	I0328 01:32:11.245851    6044 command_runner.go:130] >    - groups:
	I0328 01:32:11.245851    6044 command_runner.go:130] > @@ -14,13 +14,13 @@
	I0328 01:32:11.245851    6044 command_runner.go:130] >    criSocket: unix:///var/run/cri-dockerd.sock
	I0328 01:32:11.245851    6044 command_runner.go:130] >    name: "multinode-240000"
	I0328 01:32:11.245851    6044 command_runner.go:130] >    kubeletExtraArgs:
	I0328 01:32:11.245851    6044 command_runner.go:130] > -    node-ip: 172.28.227.122
	I0328 01:32:11.245851    6044 command_runner.go:130] > +    node-ip: 172.28.229.19
	I0328 01:32:11.245851    6044 command_runner.go:130] >    taints: []
	I0328 01:32:11.245851    6044 command_runner.go:130] >  ---
	I0328 01:32:11.245851    6044 command_runner.go:130] >  apiVersion: kubeadm.k8s.io/v1beta3
	I0328 01:32:11.245851    6044 command_runner.go:130] >  kind: ClusterConfiguration
	I0328 01:32:11.245851    6044 command_runner.go:130] >  apiServer:
	I0328 01:32:11.245851    6044 command_runner.go:130] > -  certSANs: ["127.0.0.1", "localhost", "172.28.227.122"]
	I0328 01:32:11.245851    6044 command_runner.go:130] > +  certSANs: ["127.0.0.1", "localhost", "172.28.229.19"]
	I0328 01:32:11.245851    6044 command_runner.go:130] >    extraArgs:
	I0328 01:32:11.245851    6044 command_runner.go:130] >      enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	I0328 01:32:11.245851    6044 command_runner.go:130] >  controllerManager:
	I0328 01:32:11.245851    6044 kubeadm.go:634] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -1,7 +1,7 @@
	 apiVersion: kubeadm.k8s.io/v1beta3
	 kind: InitConfiguration
	 localAPIEndpoint:
	-  advertiseAddress: 172.28.227.122
	+  advertiseAddress: 172.28.229.19
	   bindPort: 8443
	 bootstrapTokens:
	   - groups:
	@@ -14,13 +14,13 @@
	   criSocket: unix:///var/run/cri-dockerd.sock
	   name: "multinode-240000"
	   kubeletExtraArgs:
	-    node-ip: 172.28.227.122
	+    node-ip: 172.28.229.19
	   taints: []
	 ---
	 apiVersion: kubeadm.k8s.io/v1beta3
	 kind: ClusterConfiguration
	 apiServer:
	-  certSANs: ["127.0.0.1", "localhost", "172.28.227.122"]
	+  certSANs: ["127.0.0.1", "localhost", "172.28.229.19"]
	   extraArgs:
	     enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	 controllerManager:
	
	-- /stdout --
	I0328 01:32:11.245851    6044 kubeadm.go:1154] stopping kube-system containers ...
	I0328 01:32:11.255514    6044 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0328 01:32:11.284915    6044 command_runner.go:130] > 29e516c918ef
	I0328 01:32:11.285010    6044 command_runner.go:130] > d02996b2d57b
	I0328 01:32:11.285010    6044 command_runner.go:130] > 28426f4e9df5
	I0328 01:32:11.285010    6044 command_runner.go:130] > 6b6f67390b07
	I0328 01:32:11.285010    6044 command_runner.go:130] > dc9808261b21
	I0328 01:32:11.285010    6044 command_runner.go:130] > bb0b3c542264
	I0328 01:32:11.285055    6044 command_runner.go:130] > 5d9ed3a20e88
	I0328 01:32:11.285055    6044 command_runner.go:130] > 6ae82cd0a848
	I0328 01:32:11.285055    6044 command_runner.go:130] > 1aa05268773e
	I0328 01:32:11.285055    6044 command_runner.go:130] > 7061eab02790
	I0328 01:32:11.285055    6044 command_runner.go:130] > a01212226d03
	I0328 01:32:11.285055    6044 command_runner.go:130] > 66f15076d344
	I0328 01:32:11.285055    6044 command_runner.go:130] > 763932cfdf0b
	I0328 01:32:11.285102    6044 command_runner.go:130] > 7415d077c6f8
	I0328 01:32:11.285102    6044 command_runner.go:130] > ec77663c174f
	I0328 01:32:11.285102    6044 command_runner.go:130] > 20ff2ecb3a6d
	I0328 01:32:11.285143    6044 docker.go:483] Stopping containers: [29e516c918ef d02996b2d57b 28426f4e9df5 6b6f67390b07 dc9808261b21 bb0b3c542264 5d9ed3a20e88 6ae82cd0a848 1aa05268773e 7061eab02790 a01212226d03 66f15076d344 763932cfdf0b 7415d077c6f8 ec77663c174f 20ff2ecb3a6d]
	I0328 01:32:11.295385    6044 ssh_runner.go:195] Run: docker stop 29e516c918ef d02996b2d57b 28426f4e9df5 6b6f67390b07 dc9808261b21 bb0b3c542264 5d9ed3a20e88 6ae82cd0a848 1aa05268773e 7061eab02790 a01212226d03 66f15076d344 763932cfdf0b 7415d077c6f8 ec77663c174f 20ff2ecb3a6d
	I0328 01:32:11.327545    6044 command_runner.go:130] > 29e516c918ef
	I0328 01:32:11.327545    6044 command_runner.go:130] > d02996b2d57b
	I0328 01:32:11.327545    6044 command_runner.go:130] > 28426f4e9df5
	I0328 01:32:11.327545    6044 command_runner.go:130] > 6b6f67390b07
	I0328 01:32:11.327545    6044 command_runner.go:130] > dc9808261b21
	I0328 01:32:11.327545    6044 command_runner.go:130] > bb0b3c542264
	I0328 01:32:11.327545    6044 command_runner.go:130] > 5d9ed3a20e88
	I0328 01:32:11.327545    6044 command_runner.go:130] > 6ae82cd0a848
	I0328 01:32:11.327545    6044 command_runner.go:130] > 1aa05268773e
	I0328 01:32:11.327545    6044 command_runner.go:130] > 7061eab02790
	I0328 01:32:11.328529    6044 command_runner.go:130] > a01212226d03
	I0328 01:32:11.328529    6044 command_runner.go:130] > 66f15076d344
	I0328 01:32:11.328529    6044 command_runner.go:130] > 763932cfdf0b
	I0328 01:32:11.328529    6044 command_runner.go:130] > 7415d077c6f8
	I0328 01:32:11.328581    6044 command_runner.go:130] > ec77663c174f
	I0328 01:32:11.328581    6044 command_runner.go:130] > 20ff2ecb3a6d
	I0328 01:32:11.342451    6044 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0328 01:32:11.392958    6044 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0328 01:32:11.413368    6044 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0328 01:32:11.413488    6044 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0328 01:32:11.413488    6044 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0328 01:32:11.413488    6044 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0328 01:32:11.413769    6044 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0328 01:32:11.413769    6044 kubeadm.go:156] found existing configuration files:
	
	I0328 01:32:11.426984    6044 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0328 01:32:11.445904    6044 command_runner.go:130] ! grep: /etc/kubernetes/admin.conf: No such file or directory
	I0328 01:32:11.446787    6044 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0328 01:32:11.458764    6044 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0328 01:32:11.493311    6044 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0328 01:32:11.510433    6044 command_runner.go:130] ! grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0328 01:32:11.510905    6044 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0328 01:32:11.524531    6044 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0328 01:32:11.556540    6044 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0328 01:32:11.575511    6044 command_runner.go:130] ! grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0328 01:32:11.575511    6044 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0328 01:32:11.588024    6044 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0328 01:32:11.620473    6044 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0328 01:32:11.639916    6044 command_runner.go:130] ! grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0328 01:32:11.640222    6044 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0328 01:32:11.654943    6044 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0328 01:32:11.690184    6044 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0328 01:32:11.716592    6044 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0328 01:32:12.005834    6044 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0328 01:32:12.005834    6044 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0328 01:32:12.005834    6044 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0328 01:32:12.005834    6044 command_runner.go:130] > [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0328 01:32:12.005834    6044 command_runner.go:130] > [certs] Using existing front-proxy-ca certificate authority
	I0328 01:32:12.005834    6044 command_runner.go:130] > [certs] Using existing front-proxy-client certificate and key on disk
	I0328 01:32:12.005834    6044 command_runner.go:130] > [certs] Using existing etcd/ca certificate authority
	I0328 01:32:12.005834    6044 command_runner.go:130] > [certs] Using existing etcd/server certificate and key on disk
	I0328 01:32:12.005834    6044 command_runner.go:130] > [certs] Using existing etcd/peer certificate and key on disk
	I0328 01:32:12.005834    6044 command_runner.go:130] > [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0328 01:32:12.005834    6044 command_runner.go:130] > [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0328 01:32:12.005834    6044 command_runner.go:130] > [certs] Using the existing "sa" key
	I0328 01:32:12.005834    6044 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0328 01:32:13.096426    6044 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0328 01:32:13.096507    6044 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0328 01:32:13.096507    6044 command_runner.go:130] > [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0328 01:32:13.096507    6044 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0328 01:32:13.096620    6044 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0328 01:32:13.096816    6044 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0328 01:32:13.096873    6044 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.090975s)
	I0328 01:32:13.096924    6044 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0328 01:32:13.429850    6044 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0328 01:32:13.429850    6044 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0328 01:32:13.429850    6044 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0328 01:32:13.429850    6044 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0328 01:32:13.548135    6044 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0328 01:32:13.548135    6044 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0328 01:32:13.548135    6044 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0328 01:32:13.548135    6044 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0328 01:32:13.548135    6044 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0328 01:32:13.671844    6044 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0328 01:32:13.672006    6044 api_server.go:52] waiting for apiserver process to appear ...
	I0328 01:32:13.684817    6044 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:32:14.198663    6044 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:32:14.688002    6044 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:32:15.196828    6044 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:32:15.683176    6044 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:32:15.712815    6044 command_runner.go:130] > 2032
	I0328 01:32:15.712815    6044 api_server.go:72] duration metric: took 2.040873s to wait for apiserver process to appear ...
	I0328 01:32:15.712912    6044 api_server.go:88] waiting for apiserver healthz status ...
	I0328 01:32:15.712969    6044 api_server.go:253] Checking apiserver healthz at https://172.28.229.19:8443/healthz ...
	I0328 01:32:19.325528    6044 api_server.go:279] https://172.28.229.19:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0328 01:32:19.325627    6044 api_server.go:103] status: https://172.28.229.19:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0328 01:32:19.325627    6044 api_server.go:253] Checking apiserver healthz at https://172.28.229.19:8443/healthz ...
	I0328 01:32:19.386465    6044 api_server.go:279] https://172.28.229.19:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0328 01:32:19.386465    6044 api_server.go:103] status: https://172.28.229.19:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0328 01:32:19.719238    6044 api_server.go:253] Checking apiserver healthz at https://172.28.229.19:8443/healthz ...
	I0328 01:32:19.731650    6044 api_server.go:279] https://172.28.229.19:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0328 01:32:19.731650    6044 api_server.go:103] status: https://172.28.229.19:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
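The verbose `/healthz` bodies above mark each check `[+]` (passing) or `[-]` (failing, here the RBAC and priority-class bootstrap hooks), and minikube keeps polling until everything passes. As a minimal sketch of reading such a body (the `failing_checks` helper is hypothetical, not minikube's actual code):

```python
def failing_checks(body: str) -> list[str]:
    """Extract the names of failing checks from a verbose /healthz body.

    Lines look like '[+]ping ok' or
    '[-]poststarthook/rbac/bootstrap-roles failed: reason withheld'.
    """
    failed = []
    for line in body.splitlines():
        line = line.strip()
        if line.startswith("[-]"):
            # the check name runs from after '[-]' to the first space
            failed.append(line[3:].split(" ")[0])
    return failed


body = """[+]ping ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]etcd ok
healthz check failed"""

print(failing_checks(body))
# ['poststarthook/rbac/bootstrap-roles', 'poststarthook/scheduling/bootstrap-system-priority-classes']
```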
	I0328 01:32:20.227123    6044 api_server.go:253] Checking apiserver healthz at https://172.28.229.19:8443/healthz ...
	I0328 01:32:20.235291    6044 api_server.go:279] https://172.28.229.19:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0328 01:32:20.235397    6044 api_server.go:103] status: https://172.28.229.19:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0328 01:32:20.721486    6044 api_server.go:253] Checking apiserver healthz at https://172.28.229.19:8443/healthz ...
	I0328 01:32:20.740353    6044 api_server.go:279] https://172.28.229.19:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0328 01:32:20.740450    6044 api_server.go:103] status: https://172.28.229.19:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0328 01:32:21.216756    6044 api_server.go:253] Checking apiserver healthz at https://172.28.229.19:8443/healthz ...
	I0328 01:32:21.228799    6044 api_server.go:279] https://172.28.229.19:8443/healthz returned 200:
	ok
	I0328 01:32:21.229301    6044 round_trippers.go:463] GET https://172.28.229.19:8443/version
	I0328 01:32:21.229301    6044 round_trippers.go:469] Request Headers:
	I0328 01:32:21.229301    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:32:21.229301    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:32:21.248951    6044 round_trippers.go:574] Response Status: 200 OK in 19 milliseconds
	I0328 01:32:21.248951    6044 round_trippers.go:577] Response Headers:
	I0328 01:32:21.248951    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:32:21.248951    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:32:21.248951    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:32:21.248951    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:32:21.248951    6044 round_trippers.go:580]     Content-Length: 263
	I0328 01:32:21.248951    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:32:21 GMT
	I0328 01:32:21.248951    6044 round_trippers.go:580]     Audit-Id: 72f12dac-ee55-42f0-9a97-040c7c2de65f
	I0328 01:32:21.248951    6044 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "29",
	  "gitVersion": "v1.29.3",
	  "gitCommit": "6813625b7cd706db5bc7388921be03071e1a492d",
	  "gitTreeState": "clean",
	  "buildDate": "2024-03-14T23:58:36Z",
	  "goVersion": "go1.21.8",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0328 01:32:21.248951    6044 api_server.go:141] control plane version: v1.29.3
	I0328 01:32:21.248951    6044 api_server.go:131] duration metric: took 5.5360011s to wait for apiserver health ...
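Once `/healthz` returns 200, the `/version` response body logged above is plain JSON; the "control plane version: v1.29.3" line comes from its `gitVersion` field. A sketch of decoding it, using an abbreviated copy of the logged body (illustrative snippet, not minikube's code):

```python
import json

# Abbreviated copy of the /version body logged above.
raw = """{
  "major": "1",
  "minor": "29",
  "gitVersion": "v1.29.3",
  "platform": "linux/amd64"
}"""

info = json.loads(raw)
# minikube's "control plane version" is taken from gitVersion.
print(info["gitVersion"])
# v1.29.3
```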
	I0328 01:32:21.248951    6044 cni.go:84] Creating CNI manager for ""
	I0328 01:32:21.248951    6044 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0328 01:32:21.251958    6044 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0328 01:32:21.266957    6044 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0328 01:32:21.275833    6044 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0328 01:32:21.275906    6044 command_runner.go:130] >   Size: 2694104   	Blocks: 5264       IO Block: 4096   regular file
	I0328 01:32:21.275962    6044 command_runner.go:130] > Device: 0,17	Inode: 3497        Links: 1
	I0328 01:32:21.275962    6044 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0328 01:32:21.275998    6044 command_runner.go:130] > Access: 2024-03-28 01:30:40.390507300 +0000
	I0328 01:32:21.276021    6044 command_runner.go:130] > Modify: 2024-03-27 22:52:09.000000000 +0000
	I0328 01:32:21.276021    6044 command_runner.go:130] > Change: 2024-03-28 01:30:30.450000000 +0000
	I0328 01:32:21.276042    6044 command_runner.go:130] >  Birth: -
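The `stat /opt/cni/bin/portmap` probe above confirms the portmap CNI plugin is already present on the guest as a regular file with mode 0755. A hedged sketch of an equivalent check (the `is_executable_file` helper is hypothetical and would run inside the guest VM, not on the host):

```python
import os
import stat


def is_executable_file(path: str) -> bool:
    """True when path is a regular file with the owner execute bit set,
    mirroring the 0755 regular file reported by stat above."""
    try:
        st = os.stat(path)
    except FileNotFoundError:
        return False
    return stat.S_ISREG(st.st_mode) and bool(st.st_mode & stat.S_IXUSR)


# On the minikube guest this would be called as:
print(is_executable_file("/opt/cni/bin/portmap"))
```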
	I0328 01:32:21.277142    6044 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.29.3/kubectl ...
	I0328 01:32:21.277211    6044 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0328 01:32:21.343342    6044 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0328 01:32:22.787502    6044 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0328 01:32:22.788080    6044 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0328 01:32:22.788080    6044 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0328 01:32:22.788080    6044 command_runner.go:130] > daemonset.apps/kindnet configured
	I0328 01:32:22.788171    6044 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.29.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.4448194s)
	I0328 01:32:22.788283    6044 system_pods.go:43] waiting for kube-system pods to appear ...
	I0328 01:32:22.788283    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/namespaces/kube-system/pods
	I0328 01:32:22.788283    6044 round_trippers.go:469] Request Headers:
	I0328 01:32:22.788283    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:32:22.788283    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:32:22.795882    6044 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0328 01:32:22.795882    6044 round_trippers.go:577] Response Headers:
	I0328 01:32:22.795882    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:32:22.795882    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:32:22.795882    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:32:22 GMT
	I0328 01:32:22.795882    6044 round_trippers.go:580]     Audit-Id: f306f21b-0c65-49da-bcda-4f2fd057ce7d
	I0328 01:32:22.795882    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:32:22.795882    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:32:22.797867    6044 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1942"},"items":[{"metadata":{"name":"coredns-76f75df574-776ph","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"dc1416cc-736d-4eab-b95d-e963572b78e3","resourceVersion":"1866","creationTimestamp":"2024-03-28T01:07:44Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"840f6c47-adf3-4c08-8b73-04e55f98f236","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:07:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"840f6c47-adf3-4c08-8b73-04e55f98f236\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 87144 chars]
	I0328 01:32:22.803872    6044 system_pods.go:59] 12 kube-system pods found
	I0328 01:32:22.803872    6044 system_pods.go:61] "coredns-76f75df574-776ph" [dc1416cc-736d-4eab-b95d-e963572b78e3] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0328 01:32:22.804853    6044 system_pods.go:61] "etcd-multinode-240000" [0a33e012-ebfe-4ac4-bf0b-ffccdd7308de] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0328 01:32:22.804853    6044 system_pods.go:61] "kindnet-hsnfl" [e049fea9-9620-4eb5-9eb0-056c68076331] Running
	I0328 01:32:22.804853    6044 system_pods.go:61] "kindnet-jvgx2" [507e3461-4bd4-46b9-9189-606b3506a742] Running
	I0328 01:32:22.804853    6044 system_pods.go:61] "kindnet-rwghf" [7c75e225-0e90-4916-bf27-a00a036e0955] Running
	I0328 01:32:22.804853    6044 system_pods.go:61] "kube-apiserver-multinode-240000" [8b9b4cf7-40b0-4a3e-96ca-28c934f9789a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0328 01:32:22.804853    6044 system_pods.go:61] "kube-controller-manager-multinode-240000" [4a79ab06-2314-43bb-8e37-45b9aab24e4e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0328 01:32:22.804853    6044 system_pods.go:61] "kube-proxy-47rqg" [22fd5683-834d-47ae-a5b4-1ed980514e1b] Running
	I0328 01:32:22.804853    6044 system_pods.go:61] "kube-proxy-55rch" [a96f841b-3e8f-42c1-be63-03914c0b90e8] Running
	I0328 01:32:22.804853    6044 system_pods.go:61] "kube-proxy-t88gz" [695603ac-ab24-4206-9728-342b2af018e4] Running
	I0328 01:32:22.804853    6044 system_pods.go:61] "kube-scheduler-multinode-240000" [7670489f-fb6c-4b5f-80e8-5fe8de8d7d19] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0328 01:32:22.804853    6044 system_pods.go:61] "storage-provisioner" [3f881c2f-5b9a-4dc5-bc8c-56784b9ff60f] Running
	I0328 01:32:22.804853    6044 system_pods.go:74] duration metric: took 16.5698ms to wait for pod list to return data ...
	I0328 01:32:22.804853    6044 node_conditions.go:102] verifying NodePressure condition ...
	I0328 01:32:22.804853    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes
	I0328 01:32:22.804853    6044 round_trippers.go:469] Request Headers:
	I0328 01:32:22.804853    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:32:22.804853    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:32:22.811154    6044 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0328 01:32:22.811154    6044 round_trippers.go:577] Response Headers:
	I0328 01:32:22.811154    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:32:22.811154    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:32:22.811154    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:32:22.811154    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:32:22.811154    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:32:22 GMT
	I0328 01:32:22.811154    6044 round_trippers.go:580]     Audit-Id: 4dc287fb-d2c0-4dd5-9300-dae5b03bdc7f
	I0328 01:32:22.811154    6044 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1942"},"items":[{"metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"1859","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v [truncated 15651 chars]
	I0328 01:32:22.812753    6044 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0328 01:32:22.812811    6044 node_conditions.go:123] node cpu capacity is 2
	I0328 01:32:22.812872    6044 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0328 01:32:22.812872    6044 node_conditions.go:123] node cpu capacity is 2
	I0328 01:32:22.812872    6044 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0328 01:32:22.812872    6044 node_conditions.go:123] node cpu capacity is 2
	I0328 01:32:22.812872    6044 node_conditions.go:105] duration metric: took 8.0186ms to run NodePressure ...
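The NodePressure pass above reads each node's capacity as strings ("cpu": "2", "ephemeral-storage": "17734596Ki"), since Kubernetes reports resource quantities with binary suffixes. A minimal conversion sketch for the Ki-suffixed value seen in this log (hypothetical helper; it handles only the Ki case, not the full Kubernetes quantity grammar):

```python
def quantity_ki_to_bytes(q: str) -> int:
    """Convert a Kubernetes binary-suffixed quantity like '17734596Ki'
    to bytes. Only the Ki suffix appearing in this log is handled."""
    if not q.endswith("Ki"):
        raise ValueError(f"unsupported quantity: {q}")
    return int(q[:-2]) * 1024


print(quantity_ki_to_bytes("17734596Ki"))
# 18160226304  (~16.9 GiB of ephemeral storage per node)
```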
	I0328 01:32:22.812933    6044 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0328 01:32:23.362349    6044 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0328 01:32:23.362349    6044 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0328 01:32:23.362349    6044 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0328 01:32:23.362349    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/namespaces/kube-system/pods?labelSelector=tier%!D(MISSING)control-plane
	I0328 01:32:23.362349    6044 round_trippers.go:469] Request Headers:
	I0328 01:32:23.362349    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:32:23.362349    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:32:23.369364    6044 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0328 01:32:23.369364    6044 round_trippers.go:577] Response Headers:
	I0328 01:32:23.370026    6044 round_trippers.go:580]     Audit-Id: 22598923-e104-4294-8af1-8c8c63fb54cf
	I0328 01:32:23.370026    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:32:23.370026    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:32:23.370026    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:32:23.370026    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:32:23.370026    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:32:23 GMT
	I0328 01:32:23.371511    6044 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1946"},"items":[{"metadata":{"name":"etcd-multinode-240000","namespace":"kube-system","uid":"0a33e012-ebfe-4ac4-bf0b-ffccdd7308de","resourceVersion":"1869","creationTimestamp":"2024-03-28T01:32:19Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.28.229.19:2379","kubernetes.io/config.hash":"9f48c65a58defdbb87996760bf93b230","kubernetes.io/config.mirror":"9f48c65a58defdbb87996760bf93b230","kubernetes.io/config.seen":"2024-03-28T01:32:13.690653938Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:32:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f: [truncated 30532 chars]
	I0328 01:32:23.372774    6044 kubeadm.go:733] kubelet initialised
	I0328 01:32:23.372774    6044 kubeadm.go:734] duration metric: took 10.4249ms waiting for restarted kubelet to initialise ...
	I0328 01:32:23.372774    6044 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0328 01:32:23.373324    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/namespaces/kube-system/pods
	I0328 01:32:23.373377    6044 round_trippers.go:469] Request Headers:
	I0328 01:32:23.373407    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:32:23.373407    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:32:23.391616    6044 round_trippers.go:574] Response Status: 200 OK in 18 milliseconds
	I0328 01:32:23.392094    6044 round_trippers.go:577] Response Headers:
	I0328 01:32:23.392094    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:32:23.392094    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:32:23 GMT
	I0328 01:32:23.392094    6044 round_trippers.go:580]     Audit-Id: 7659c847-0240-4180-8d5e-34ad99a7e7c6
	I0328 01:32:23.392094    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:32:23.392094    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:32:23.392094    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:32:23.393994    6044 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1946"},"items":[{"metadata":{"name":"coredns-76f75df574-776ph","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"dc1416cc-736d-4eab-b95d-e963572b78e3","resourceVersion":"1866","creationTimestamp":"2024-03-28T01:07:44Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"840f6c47-adf3-4c08-8b73-04e55f98f236","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:07:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"840f6c47-adf3-4c08-8b73-04e55f98f236\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 87144 chars]
	I0328 01:32:23.398342    6044 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-776ph" in "kube-system" namespace to be "Ready" ...
	I0328 01:32:23.399366    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-776ph
	I0328 01:32:23.399366    6044 round_trippers.go:469] Request Headers:
	I0328 01:32:23.399366    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:32:23.399366    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:32:23.403360    6044 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0328 01:32:23.403360    6044 round_trippers.go:577] Response Headers:
	I0328 01:32:23.403360    6044 round_trippers.go:580]     Audit-Id: c126a05d-c80f-4243-8d68-38114f1a4c62
	I0328 01:32:23.403360    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:32:23.403360    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:32:23.403360    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:32:23.403779    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:32:23.403779    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:32:23 GMT
	I0328 01:32:23.403987    6044 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-76f75df574-776ph","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"dc1416cc-736d-4eab-b95d-e963572b78e3","resourceVersion":"1866","creationTimestamp":"2024-03-28T01:07:44Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"840f6c47-adf3-4c08-8b73-04e55f98f236","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:07:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"840f6c47-adf3-4c08-8b73-04e55f98f236\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0328 01:32:23.404623    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:32:23.404623    6044 round_trippers.go:469] Request Headers:
	I0328 01:32:23.404699    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:32:23.404699    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:32:23.410201    6044 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 01:32:23.410201    6044 round_trippers.go:577] Response Headers:
	I0328 01:32:23.410201    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:32:23.410201    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:32:23.410814    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:32:23 GMT
	I0328 01:32:23.410814    6044 round_trippers.go:580]     Audit-Id: 5741f29c-842d-4c45-aa55-c9106415f8e2
	I0328 01:32:23.410814    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:32:23.410814    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:32:23.411080    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"1859","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5372 chars]
	I0328 01:32:23.411692    6044 pod_ready.go:97] node "multinode-240000" hosting pod "coredns-76f75df574-776ph" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-240000" has status "Ready":"False"
	I0328 01:32:23.411817    6044 pod_ready.go:81] duration metric: took 13.4745ms for pod "coredns-76f75df574-776ph" in "kube-system" namespace to be "Ready" ...
	E0328 01:32:23.411817    6044 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-240000" hosting pod "coredns-76f75df574-776ph" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-240000" has status "Ready":"False"
	I0328 01:32:23.411877    6044 pod_ready.go:78] waiting up to 4m0s for pod "etcd-multinode-240000" in "kube-system" namespace to be "Ready" ...
	I0328 01:32:23.412015    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-240000
	I0328 01:32:23.412015    6044 round_trippers.go:469] Request Headers:
	I0328 01:32:23.412015    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:32:23.412015    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:32:23.415428    6044 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0328 01:32:23.415428    6044 round_trippers.go:577] Response Headers:
	I0328 01:32:23.415428    6044 round_trippers.go:580]     Audit-Id: 89857769-163a-4bf5-ba36-3d8d76ff7ca3
	I0328 01:32:23.415428    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:32:23.415428    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:32:23.415428    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:32:23.415428    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:32:23.415428    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:32:23 GMT
	I0328 01:32:23.415428    6044 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-240000","namespace":"kube-system","uid":"0a33e012-ebfe-4ac4-bf0b-ffccdd7308de","resourceVersion":"1869","creationTimestamp":"2024-03-28T01:32:19Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.28.229.19:2379","kubernetes.io/config.hash":"9f48c65a58defdbb87996760bf93b230","kubernetes.io/config.mirror":"9f48c65a58defdbb87996760bf93b230","kubernetes.io/config.seen":"2024-03-28T01:32:13.690653938Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:32:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6384 chars]
	I0328 01:32:23.416406    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:32:23.416406    6044 round_trippers.go:469] Request Headers:
	I0328 01:32:23.416406    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:32:23.416406    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:32:23.419415    6044 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0328 01:32:23.419415    6044 round_trippers.go:577] Response Headers:
	I0328 01:32:23.419415    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:32:23.419415    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:32:23 GMT
	I0328 01:32:23.419415    6044 round_trippers.go:580]     Audit-Id: ab1e3d0f-d824-4fb9-855a-6f89de629d07
	I0328 01:32:23.419415    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:32:23.419415    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:32:23.419812    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:32:23.420084    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"1859","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5372 chars]
	I0328 01:32:23.420574    6044 pod_ready.go:97] node "multinode-240000" hosting pod "etcd-multinode-240000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-240000" has status "Ready":"False"
	I0328 01:32:23.420632    6044 pod_ready.go:81] duration metric: took 8.7546ms for pod "etcd-multinode-240000" in "kube-system" namespace to be "Ready" ...
	E0328 01:32:23.420632    6044 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-240000" hosting pod "etcd-multinode-240000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-240000" has status "Ready":"False"
	I0328 01:32:23.420715    6044 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-multinode-240000" in "kube-system" namespace to be "Ready" ...
	I0328 01:32:23.420826    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-240000
	I0328 01:32:23.420826    6044 round_trippers.go:469] Request Headers:
	I0328 01:32:23.420826    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:32:23.420826    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:32:23.424565    6044 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0328 01:32:23.424565    6044 round_trippers.go:577] Response Headers:
	I0328 01:32:23.424565    6044 round_trippers.go:580]     Audit-Id: f1f4b044-1215-4c44-b46f-deee6a9cf7dc
	I0328 01:32:23.424565    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:32:23.424565    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:32:23.424565    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:32:23.424565    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:32:23.424565    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:32:23 GMT
	I0328 01:32:23.424565    6044 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-240000","namespace":"kube-system","uid":"8b9b4cf7-40b0-4a3e-96ca-28c934f9789a","resourceVersion":"1870","creationTimestamp":"2024-03-28T01:32:19Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.28.229.19:8443","kubernetes.io/config.hash":"ada1864a97137760b3789cc738948aa2","kubernetes.io/config.mirror":"ada1864a97137760b3789cc738948aa2","kubernetes.io/config.seen":"2024-03-28T01:32:13.677615805Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:32:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7939 chars]
	I0328 01:32:23.425708    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:32:23.425708    6044 round_trippers.go:469] Request Headers:
	I0328 01:32:23.425708    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:32:23.425708    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:32:23.429466    6044 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0328 01:32:23.429466    6044 round_trippers.go:577] Response Headers:
	I0328 01:32:23.429466    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:32:23.429466    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:32:23 GMT
	I0328 01:32:23.429466    6044 round_trippers.go:580]     Audit-Id: f20d50ef-3eb7-46d5-8007-8d3851472675
	I0328 01:32:23.429466    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:32:23.429466    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:32:23.429466    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:32:23.430294    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"1859","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5372 chars]
	I0328 01:32:23.430294    6044 pod_ready.go:97] node "multinode-240000" hosting pod "kube-apiserver-multinode-240000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-240000" has status "Ready":"False"
	I0328 01:32:23.430294    6044 pod_ready.go:81] duration metric: took 9.5785ms for pod "kube-apiserver-multinode-240000" in "kube-system" namespace to be "Ready" ...
	E0328 01:32:23.430294    6044 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-240000" hosting pod "kube-apiserver-multinode-240000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-240000" has status "Ready":"False"
	I0328 01:32:23.430294    6044 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-multinode-240000" in "kube-system" namespace to be "Ready" ...
	I0328 01:32:23.430952    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-240000
	I0328 01:32:23.430993    6044 round_trippers.go:469] Request Headers:
	I0328 01:32:23.430993    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:32:23.431029    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:32:23.435608    6044 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 01:32:23.435608    6044 round_trippers.go:577] Response Headers:
	I0328 01:32:23.435608    6044 round_trippers.go:580]     Audit-Id: 220669ea-17ea-4a31-822f-e000b9198762
	I0328 01:32:23.435608    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:32:23.435967    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:32:23.435967    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:32:23.435967    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:32:23.435967    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:32:23 GMT
	I0328 01:32:23.436509    6044 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-240000","namespace":"kube-system","uid":"4a79ab06-2314-43bb-8e37-45b9aab24e4e","resourceVersion":"1867","creationTimestamp":"2024-03-28T01:07:31Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"092744cdc60a216294790b52c372bdaa","kubernetes.io/config.mirror":"092744cdc60a216294790b52c372bdaa","kubernetes.io/config.seen":"2024-03-28T01:07:31.458008757Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:07:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7732 chars]
	I0328 01:32:23.437400    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:32:23.437469    6044 round_trippers.go:469] Request Headers:
	I0328 01:32:23.437469    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:32:23.437469    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:32:23.440611    6044 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0328 01:32:23.440611    6044 round_trippers.go:577] Response Headers:
	I0328 01:32:23.440611    6044 round_trippers.go:580]     Audit-Id: 4e3122da-e4c0-4a49-b78c-b4945d8cd2db
	I0328 01:32:23.440611    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:32:23.440611    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:32:23.440611    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:32:23.440611    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:32:23.440611    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:32:23 GMT
	I0328 01:32:23.441417    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"1859","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5372 chars]
	I0328 01:32:23.442089    6044 pod_ready.go:97] node "multinode-240000" hosting pod "kube-controller-manager-multinode-240000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-240000" has status "Ready":"False"
	I0328 01:32:23.442089    6044 pod_ready.go:81] duration metric: took 11.7947ms for pod "kube-controller-manager-multinode-240000" in "kube-system" namespace to be "Ready" ...
	E0328 01:32:23.442089    6044 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-240000" hosting pod "kube-controller-manager-multinode-240000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-240000" has status "Ready":"False"
	I0328 01:32:23.442089    6044 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-47rqg" in "kube-system" namespace to be "Ready" ...
	I0328 01:32:23.562617    6044 request.go:629] Waited for 120.4169ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.229.19:8443/api/v1/namespaces/kube-system/pods/kube-proxy-47rqg
	I0328 01:32:23.562830    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/namespaces/kube-system/pods/kube-proxy-47rqg
	I0328 01:32:23.562926    6044 round_trippers.go:469] Request Headers:
	I0328 01:32:23.562926    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:32:23.562926    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:32:23.567221    6044 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0328 01:32:23.567221    6044 round_trippers.go:577] Response Headers:
	I0328 01:32:23.567221    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:32:23.567293    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:32:23.567293    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:32:23.567293    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:32:23.567293    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:32:23 GMT
	I0328 01:32:23.567293    6044 round_trippers.go:580]     Audit-Id: 1698ab54-abd5-401e-9b74-d35987316474
	I0328 01:32:23.567513    6044 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-47rqg","generateName":"kube-proxy-","namespace":"kube-system","uid":"22fd5683-834d-47ae-a5b4-1ed980514e1b","resourceVersion":"1926","creationTimestamp":"2024-03-28T01:07:44Z","labels":{"controller-revision-hash":"7659797656","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"386441f6-e376-4593-92ba-fa739207b68d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:07:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"386441f6-e376-4593-92ba-fa739207b68d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6034 chars]
	I0328 01:32:23.766958    6044 request.go:629] Waited for 198.4053ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:32:23.767191    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:32:23.767191    6044 round_trippers.go:469] Request Headers:
	I0328 01:32:23.767191    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:32:23.767191    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:32:23.771975    6044 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 01:32:23.771975    6044 round_trippers.go:577] Response Headers:
	I0328 01:32:23.771975    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:32:23 GMT
	I0328 01:32:23.771975    6044 round_trippers.go:580]     Audit-Id: daef0079-aa85-4f4a-bfa8-973a4cd67867
	I0328 01:32:23.771975    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:32:23.771975    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:32:23.771975    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:32:23.771975    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:32:23.771975    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"1859","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5372 chars]
	I0328 01:32:23.773542    6044 pod_ready.go:97] node "multinode-240000" hosting pod "kube-proxy-47rqg" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-240000" has status "Ready":"False"
	I0328 01:32:23.773606    6044 pod_ready.go:81] duration metric: took 331.5157ms for pod "kube-proxy-47rqg" in "kube-system" namespace to be "Ready" ...
	E0328 01:32:23.773606    6044 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-240000" hosting pod "kube-proxy-47rqg" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-240000" has status "Ready":"False"
	I0328 01:32:23.773606    6044 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-55rch" in "kube-system" namespace to be "Ready" ...
	I0328 01:32:23.974151    6044 request.go:629] Waited for 200.3791ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.229.19:8443/api/v1/namespaces/kube-system/pods/kube-proxy-55rch
	I0328 01:32:23.974431    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/namespaces/kube-system/pods/kube-proxy-55rch
	I0328 01:32:23.974687    6044 round_trippers.go:469] Request Headers:
	I0328 01:32:23.974687    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:32:23.974687    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:32:23.978771    6044 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 01:32:23.978771    6044 round_trippers.go:577] Response Headers:
	I0328 01:32:23.978771    6044 round_trippers.go:580]     Audit-Id: 1b6a3f8f-3d3a-4282-ae29-6a076d976278
	I0328 01:32:23.978771    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:32:23.978771    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:32:23.978771    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:32:23.978771    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:32:23.978771    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:32:23 GMT
	I0328 01:32:23.978771    6044 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-55rch","generateName":"kube-proxy-","namespace":"kube-system","uid":"a96f841b-3e8f-42c1-be63-03914c0b90e8","resourceVersion":"1831","creationTimestamp":"2024-03-28T01:15:58Z","labels":{"controller-revision-hash":"7659797656","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"386441f6-e376-4593-92ba-fa739207b68d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:15:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"386441f6-e376-4593-92ba-fa739207b68d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6067 chars]
	I0328 01:32:24.164582    6044 request.go:629] Waited for 184.5948ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.229.19:8443/api/v1/nodes/multinode-240000-m03
	I0328 01:32:24.164798    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000-m03
	I0328 01:32:24.164798    6044 round_trippers.go:469] Request Headers:
	I0328 01:32:24.164798    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:32:24.164798    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:32:24.169769    6044 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 01:32:24.169769    6044 round_trippers.go:577] Response Headers:
	I0328 01:32:24.169769    6044 round_trippers.go:580]     Audit-Id: 347d5143-d72d-4f28-b657-4a4fea1a4a3a
	I0328 01:32:24.169839    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:32:24.169839    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:32:24.169839    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:32:24.169839    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:32:24.169839    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:32:24 GMT
	I0328 01:32:24.170093    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000-m03","uid":"dbbc38c1-7871-4a48-98eb-4fd00b43bc22","resourceVersion":"1842","creationTimestamp":"2024-03-28T01:27:30Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_28T01_27_31_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:27:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-mana [truncated 4407 chars]
	I0328 01:32:24.170603    6044 pod_ready.go:97] node "multinode-240000-m03" hosting pod "kube-proxy-55rch" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-240000-m03" has status "Ready":"Unknown"
	I0328 01:32:24.170660    6044 pod_ready.go:81] duration metric: took 397.0507ms for pod "kube-proxy-55rch" in "kube-system" namespace to be "Ready" ...
	E0328 01:32:24.170715    6044 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-240000-m03" hosting pod "kube-proxy-55rch" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-240000-m03" has status "Ready":"Unknown"
	I0328 01:32:24.170715    6044 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-t88gz" in "kube-system" namespace to be "Ready" ...
	I0328 01:32:24.373285    6044 request.go:629] Waited for 202.1221ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.229.19:8443/api/v1/namespaces/kube-system/pods/kube-proxy-t88gz
	I0328 01:32:24.373285    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/namespaces/kube-system/pods/kube-proxy-t88gz
	I0328 01:32:24.373285    6044 round_trippers.go:469] Request Headers:
	I0328 01:32:24.373285    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:32:24.373285    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:32:24.377942    6044 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 01:32:24.378224    6044 round_trippers.go:577] Response Headers:
	I0328 01:32:24.378224    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:32:24 GMT
	I0328 01:32:24.378224    6044 round_trippers.go:580]     Audit-Id: c94e4e5a-1e6d-4fa9-9d80-72b2f2c49cdf
	I0328 01:32:24.378224    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:32:24.378224    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:32:24.378224    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:32:24.378224    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:32:24.378754    6044 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-t88gz","generateName":"kube-proxy-","namespace":"kube-system","uid":"695603ac-ab24-4206-9728-342b2af018e4","resourceVersion":"650","creationTimestamp":"2024-03-28T01:10:54Z","labels":{"controller-revision-hash":"7659797656","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"386441f6-e376-4593-92ba-fa739207b68d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:10:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"386441f6-e376-4593-92ba-fa739207b68d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5841 chars]
	I0328 01:32:24.578424    6044 request.go:629] Waited for 198.6954ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.229.19:8443/api/v1/nodes/multinode-240000-m02
	I0328 01:32:24.578547    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000-m02
	I0328 01:32:24.578547    6044 round_trippers.go:469] Request Headers:
	I0328 01:32:24.578547    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:32:24.578547    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:32:24.582888    6044 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 01:32:24.582888    6044 round_trippers.go:577] Response Headers:
	I0328 01:32:24.582888    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:32:24.582888    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:32:24.583181    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:32:24.583181    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:32:24.583181    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:32:24 GMT
	I0328 01:32:24.583181    6044 round_trippers.go:580]     Audit-Id: f21c33af-496a-4d86-97ab-574e1116bee1
	I0328 01:32:24.585884    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000-m02","uid":"dcbe05d8-e31e-4891-a7f5-f1d6a1993934","resourceVersion":"1676","creationTimestamp":"2024-03-28T01:10:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_28T01_10_55_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:10:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:meta
data":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-mana [truncated 3834 chars]
	I0328 01:32:24.585884    6044 pod_ready.go:92] pod "kube-proxy-t88gz" in "kube-system" namespace has status "Ready":"True"
	I0328 01:32:24.585884    6044 pod_ready.go:81] duration metric: took 415.1663ms for pod "kube-proxy-t88gz" in "kube-system" namespace to be "Ready" ...
	I0328 01:32:24.585884    6044 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-multinode-240000" in "kube-system" namespace to be "Ready" ...
	I0328 01:32:24.765768    6044 request.go:629] Waited for 179.1164ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.229.19:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-240000
	I0328 01:32:24.766039    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-240000
	I0328 01:32:24.766039    6044 round_trippers.go:469] Request Headers:
	I0328 01:32:24.766039    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:32:24.766039    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:32:24.771490    6044 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 01:32:24.771490    6044 round_trippers.go:577] Response Headers:
	I0328 01:32:24.771564    6044 round_trippers.go:580]     Audit-Id: 2c7100fb-9f35-4070-99ce-5b674459ceba
	I0328 01:32:24.771564    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:32:24.771721    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:32:24.771721    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:32:24.771721    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:32:24.771721    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:32:24 GMT
	I0328 01:32:24.771923    6044 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-240000","namespace":"kube-system","uid":"7670489f-fb6c-4b5f-80e8-5fe8de8d7d19","resourceVersion":"1868","creationTimestamp":"2024-03-28T01:07:28Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"f5f9b00a2a0d8b16290abf555def0fb3","kubernetes.io/config.mirror":"f5f9b00a2a0d8b16290abf555def0fb3","kubernetes.io/config.seen":"2024-03-28T01:07:21.513186595Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:07:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{}
,"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 5444 chars]
	I0328 01:32:24.968690    6044 request.go:629] Waited for 195.8642ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:32:24.968690    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:32:24.968690    6044 round_trippers.go:469] Request Headers:
	I0328 01:32:24.968690    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:32:24.968690    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:32:24.973620    6044 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 01:32:24.973620    6044 round_trippers.go:577] Response Headers:
	I0328 01:32:24.973620    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:32:24.973620    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:32:24 GMT
	I0328 01:32:24.973620    6044 round_trippers.go:580]     Audit-Id: 96942bd8-087c-42d8-ba5c-44b9fe634e1d
	I0328 01:32:24.973620    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:32:24.973620    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:32:24.973734    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:32:24.973779    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"1859","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5372 chars]
	I0328 01:32:24.974540    6044 pod_ready.go:97] node "multinode-240000" hosting pod "kube-scheduler-multinode-240000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-240000" has status "Ready":"False"
	I0328 01:32:24.974611    6044 pod_ready.go:81] duration metric: took 388.7245ms for pod "kube-scheduler-multinode-240000" in "kube-system" namespace to be "Ready" ...
	E0328 01:32:24.974611    6044 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-240000" hosting pod "kube-scheduler-multinode-240000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-240000" has status "Ready":"False"
	I0328 01:32:24.974611    6044 pod_ready.go:38] duration metric: took 1.6018265s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0328 01:32:24.974730    6044 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0328 01:32:24.999010    6044 command_runner.go:130] > -16
	I0328 01:32:24.999010    6044 ops.go:34] apiserver oom_adj: -16
	I0328 01:32:24.999010    6044 kubeadm.go:591] duration metric: took 13.8430667s to restartPrimaryControlPlane
	I0328 01:32:24.999010    6044 kubeadm.go:393] duration metric: took 13.9148017s to StartCluster
	I0328 01:32:24.999010    6044 settings.go:142] acquiring lock: {Name:mk6b97e58c5fe8f88c3b8025e136ed13b1b7453d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 01:32:24.999702    6044 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0328 01:32:25.001404    6044 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\kubeconfig: {Name:mk4f4c590fd703778dedd3b8c3d630c561af8c6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0328 01:32:25.003180    6044 start.go:234] Will wait 6m0s for node &{Name: IP:172.28.229.19 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0328 01:32:25.008497    6044 out.go:177] * Verifying Kubernetes components...
	I0328 01:32:25.003376    6044 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0328 01:32:25.003561    6044 config.go:182] Loaded profile config "multinode-240000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0328 01:32:25.013871    6044 out.go:177] * Enabled addons: 
	I0328 01:32:25.014678    6044 addons.go:505] duration metric: took 11.4981ms for enable addons: enabled=[]
	I0328 01:32:25.024717    6044 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0328 01:32:25.337072    6044 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0328 01:32:25.367810    6044 node_ready.go:35] waiting up to 6m0s for node "multinode-240000" to be "Ready" ...
	I0328 01:32:25.368966    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:32:25.369033    6044 round_trippers.go:469] Request Headers:
	I0328 01:32:25.369056    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:32:25.369056    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:32:25.372656    6044 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0328 01:32:25.372656    6044 round_trippers.go:577] Response Headers:
	I0328 01:32:25.372656    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:32:25.372656    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:32:25.372656    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:32:25 GMT
	I0328 01:32:25.372656    6044 round_trippers.go:580]     Audit-Id: 7990af29-714a-474b-b648-fad0541389d0
	I0328 01:32:25.372656    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:32:25.373441    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:32:25.373760    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"1859","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5372 chars]
	I0328 01:32:25.873148    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:32:25.873205    6044 round_trippers.go:469] Request Headers:
	I0328 01:32:25.873205    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:32:25.873205    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:32:25.877548    6044 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 01:32:25.877548    6044 round_trippers.go:577] Response Headers:
	I0328 01:32:25.878124    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:32:25.878124    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:32:25 GMT
	I0328 01:32:25.878124    6044 round_trippers.go:580]     Audit-Id: 917ae63e-5384-4274-9f85-8beb8604f997
	I0328 01:32:25.878124    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:32:25.878124    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:32:25.878124    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:32:25.878524    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"1859","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5372 chars]
	I0328 01:32:26.376792    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:32:26.376792    6044 round_trippers.go:469] Request Headers:
	I0328 01:32:26.376792    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:32:26.376792    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:32:26.383478    6044 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0328 01:32:26.383621    6044 round_trippers.go:577] Response Headers:
	I0328 01:32:26.383621    6044 round_trippers.go:580]     Audit-Id: 3f4d1b84-8eee-41fd-bb59-51b89354ca3f
	I0328 01:32:26.383621    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:32:26.383621    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:32:26.383621    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:32:26.383621    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:32:26.383621    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:32:26 GMT
	I0328 01:32:26.383799    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"1859","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5372 chars]
	I0328 01:32:26.877594    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:32:26.877594    6044 round_trippers.go:469] Request Headers:
	I0328 01:32:26.877594    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:32:26.877594    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:32:26.884075    6044 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0328 01:32:26.884075    6044 round_trippers.go:577] Response Headers:
	I0328 01:32:26.884451    6044 round_trippers.go:580]     Audit-Id: 7ec51b15-bb24-4b9a-8d31-24c4df0b9d6c
	I0328 01:32:26.884451    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:32:26.884451    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:32:26.884451    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:32:26.884451    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:32:26.884451    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:32:26 GMT
	I0328 01:32:26.884556    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"1859","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5372 chars]
	I0328 01:32:27.379989    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:32:27.380062    6044 round_trippers.go:469] Request Headers:
	I0328 01:32:27.380062    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:32:27.380062    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:32:27.383811    6044 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0328 01:32:27.383834    6044 round_trippers.go:577] Response Headers:
	I0328 01:32:27.383896    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:32:27.384054    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:32:27.384054    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:32:27.384054    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:32:27 GMT
	I0328 01:32:27.384054    6044 round_trippers.go:580]     Audit-Id: 14087351-d4f7-40dd-9294-41ece6e36270
	I0328 01:32:27.384054    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:32:27.384212    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"1859","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5372 chars]
	I0328 01:32:27.384898    6044 node_ready.go:53] node "multinode-240000" has status "Ready":"False"
	I0328 01:32:27.871883    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:32:27.871883    6044 round_trippers.go:469] Request Headers:
	I0328 01:32:27.872030    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:32:27.872030    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:32:27.876137    6044 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 01:32:27.876945    6044 round_trippers.go:577] Response Headers:
	I0328 01:32:27.876945    6044 round_trippers.go:580]     Audit-Id: 60b97c85-27c4-4698-bfc7-f0f6c9d85811
	I0328 01:32:27.876945    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:32:27.876945    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:32:27.876945    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:32:27.876945    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:32:27.876945    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:32:27 GMT
	I0328 01:32:27.877030    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"1859","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5372 chars]
	I0328 01:32:28.375532    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:32:28.375532    6044 round_trippers.go:469] Request Headers:
	I0328 01:32:28.375532    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:32:28.375532    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:32:28.382107    6044 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0328 01:32:28.382107    6044 round_trippers.go:577] Response Headers:
	I0328 01:32:28.382107    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:32:28.382107    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:32:28.382107    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:32:28.382107    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:32:28.382107    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:32:28 GMT
	I0328 01:32:28.382107    6044 round_trippers.go:580]     Audit-Id: 05e0d4d7-c269-47a6-89bb-bffa4d2770a9
	I0328 01:32:28.382107    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"1859","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5372 chars]
	I0328 01:32:28.878151    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:32:28.878151    6044 round_trippers.go:469] Request Headers:
	I0328 01:32:28.878151    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:32:28.878151    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:32:28.881738    6044 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0328 01:32:28.881738    6044 round_trippers.go:577] Response Headers:
	I0328 01:32:28.881738    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:32:28.881738    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:32:28.881738    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:32:28.881738    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:32:28.881738    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:32:28 GMT
	I0328 01:32:28.881738    6044 round_trippers.go:580]     Audit-Id: bba28862-d523-4da5-bbf4-048da4b0ffbe
	I0328 01:32:28.883058    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"1859","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5372 chars]
	I0328 01:32:29.369320    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:32:29.369634    6044 round_trippers.go:469] Request Headers:
	I0328 01:32:29.369634    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:32:29.369812    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:32:29.374327    6044 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 01:32:29.374327    6044 round_trippers.go:577] Response Headers:
	I0328 01:32:29.374327    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:32:29.374327    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:32:29.374327    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:32:29.374327    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:32:29 GMT
	I0328 01:32:29.374327    6044 round_trippers.go:580]     Audit-Id: 678b863f-3167-4e52-806b-39bd3d866bb2
	I0328 01:32:29.374327    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:32:29.375072    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"1859","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5372 chars]
	I0328 01:32:29.875100    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:32:29.875100    6044 round_trippers.go:469] Request Headers:
	I0328 01:32:29.875100    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:32:29.875100    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:32:29.879745    6044 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 01:32:29.879745    6044 round_trippers.go:577] Response Headers:
	I0328 01:32:29.879745    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:32:29 GMT
	I0328 01:32:29.879745    6044 round_trippers.go:580]     Audit-Id: 9c4420ec-1bf5-4771-9b6d-6bbe10c36b2a
	I0328 01:32:29.879951    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:32:29.879951    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:32:29.879951    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:32:29.879951    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:32:29.880203    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"1859","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5372 chars]
	I0328 01:32:29.881160    6044 node_ready.go:53] node "multinode-240000" has status "Ready":"False"
	I0328 01:32:30.376975    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:32:30.376975    6044 round_trippers.go:469] Request Headers:
	I0328 01:32:30.376975    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:32:30.376975    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:32:30.381566    6044 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 01:32:30.381566    6044 round_trippers.go:577] Response Headers:
	I0328 01:32:30.381672    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:32:30 GMT
	I0328 01:32:30.381672    6044 round_trippers.go:580]     Audit-Id: 5727d147-60f3-4b20-b046-9b4e66307512
	I0328 01:32:30.381672    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:32:30.381672    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:32:30.381672    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:32:30.381672    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:32:30.381919    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"1859","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5372 chars]
	I0328 01:32:30.877103    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:32:30.877103    6044 round_trippers.go:469] Request Headers:
	I0328 01:32:30.877103    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:32:30.877103    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:32:30.885079    6044 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0328 01:32:30.885079    6044 round_trippers.go:577] Response Headers:
	I0328 01:32:30.885079    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:32:30.885079    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:32:30 GMT
	I0328 01:32:30.885079    6044 round_trippers.go:580]     Audit-Id: b4cca81d-9013-4ac9-becd-44ff47d880e1
	I0328 01:32:30.885079    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:32:30.885079    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:32:30.885079    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:32:30.885079    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"1859","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5372 chars]
	I0328 01:32:31.377228    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:32:31.377285    6044 round_trippers.go:469] Request Headers:
	I0328 01:32:31.377285    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:32:31.377285    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:32:31.380759    6044 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0328 01:32:31.380759    6044 round_trippers.go:577] Response Headers:
	I0328 01:32:31.380759    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:32:31.380759    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:32:31.380759    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:32:31.380759    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:32:31.380759    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:32:31 GMT
	I0328 01:32:31.380759    6044 round_trippers.go:580]     Audit-Id: 09ed7261-20fc-40dd-b579-56864756df7c
	I0328 01:32:31.380759    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"1859","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5372 chars]
	I0328 01:32:31.882024    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:32:31.882024    6044 round_trippers.go:469] Request Headers:
	I0328 01:32:31.882024    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:32:31.882111    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:32:31.887412    6044 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 01:32:31.887479    6044 round_trippers.go:577] Response Headers:
	I0328 01:32:31.887662    6044 round_trippers.go:580]     Audit-Id: 7b8b9299-426a-45a6-8a23-0169ad3abc29
	I0328 01:32:31.887662    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:32:31.887662    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:32:31.887662    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:32:31.887662    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:32:31.887662    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:32:31 GMT
	I0328 01:32:31.887662    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"1859","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5372 chars]
	I0328 01:32:31.888339    6044 node_ready.go:53] node "multinode-240000" has status "Ready":"False"
	I0328 01:32:32.369274    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:32:32.369274    6044 round_trippers.go:469] Request Headers:
	I0328 01:32:32.369537    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:32:32.369537    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:32:32.374876    6044 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 01:32:32.374876    6044 round_trippers.go:577] Response Headers:
	I0328 01:32:32.374941    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:32:32.374941    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:32:32.374941    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:32:32.374941    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:32:32.374941    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:32:32 GMT
	I0328 01:32:32.374941    6044 round_trippers.go:580]     Audit-Id: d4a7a6df-b3df-4c3c-ba61-ef7aef928792
	I0328 01:32:32.375218    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"1859","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5372 chars]
	I0328 01:32:32.876487    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:32:32.876487    6044 round_trippers.go:469] Request Headers:
	I0328 01:32:32.876487    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:32:32.876487    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:32:32.880072    6044 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0328 01:32:32.880072    6044 round_trippers.go:577] Response Headers:
	I0328 01:32:32.880072    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:32:32.880072    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:32:32.880072    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:32:32.880072    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:32:32 GMT
	I0328 01:32:32.880072    6044 round_trippers.go:580]     Audit-Id: 774a2c59-4fd8-45d8-bdb6-7a187b7991b4
	I0328 01:32:32.880072    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:32:32.880072    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"1977","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5588 chars]
	I0328 01:32:33.380886    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:32:33.380886    6044 round_trippers.go:469] Request Headers:
	I0328 01:32:33.380886    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:32:33.380886    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:32:33.385366    6044 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 01:32:33.385366    6044 round_trippers.go:577] Response Headers:
	I0328 01:32:33.385366    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:32:33.385825    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:32:33.385825    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:32:33.385825    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:32:33 GMT
	I0328 01:32:33.385825    6044 round_trippers.go:580]     Audit-Id: aea51daa-93a9-429f-bcac-cea2d1e746fe
	I0328 01:32:33.385825    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:32:33.386091    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"1977","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5588 chars]
	I0328 01:32:33.868031    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:32:33.868031    6044 round_trippers.go:469] Request Headers:
	I0328 01:32:33.868031    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:32:33.868031    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:32:33.871095    6044 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0328 01:32:33.871589    6044 round_trippers.go:577] Response Headers:
	I0328 01:32:33.871589    6044 round_trippers.go:580]     Audit-Id: 993e6d76-750a-466b-8755-ee2d377898d4
	I0328 01:32:33.871589    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:32:33.871767    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:32:33.871767    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:32:33.871888    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:32:33.871972    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:32:33 GMT
	I0328 01:32:33.872388    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"1977","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5588 chars]
	I0328 01:32:34.375377    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:32:34.375377    6044 round_trippers.go:469] Request Headers:
	I0328 01:32:34.375377    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:32:34.375377    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:32:34.379894    6044 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 01:32:34.379894    6044 round_trippers.go:577] Response Headers:
	I0328 01:32:34.379894    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:32:34.379894    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:32:34 GMT
	I0328 01:32:34.379894    6044 round_trippers.go:580]     Audit-Id: c3809b19-bca1-4284-8c1f-ac9dffb986cc
	I0328 01:32:34.379894    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:32:34.379894    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:32:34.379894    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:32:34.380332    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"1977","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5588 chars]
	I0328 01:32:34.380332    6044 node_ready.go:53] node "multinode-240000" has status "Ready":"False"
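The `node_ready.go:53` lines above show the poll loop repeatedly fetching the Node object and concluding `"Ready":"False"` from its status conditions. As a minimal sketch of that decision (not minikube's actual implementation; the struct, function name, and sample body below are hypothetical, since the logged response bodies are truncated before the `status` field):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// nodeStatus models only the fields needed to read the Ready condition
// from a /api/v1/nodes/<name> response body.
type nodeStatus struct {
	Status struct {
		Conditions []struct {
			Type   string `json:"type"`
			Status string `json:"status"`
		} `json:"conditions"`
	} `json:"status"`
}

// nodeReady reports whether the Node JSON carries a Ready condition
// with status "True". Absence of the condition counts as not ready.
func nodeReady(body []byte) (bool, error) {
	var n nodeStatus
	if err := json.Unmarshal(body, &n); err != nil {
		return false, err
	}
	for _, c := range n.Status.Conditions {
		if c.Type == "Ready" {
			return c.Status == "True", nil
		}
	}
	return false, nil
}

func main() {
	// Hypothetical body for illustration; the logged responses above are
	// truncated before status.conditions, so this is not the real payload.
	body := []byte(`{"status":{"conditions":[{"type":"Ready","status":"False"}]}}`)
	ready, err := nodeReady(body)
	if err != nil {
		panic(err)
	}
	fmt.Println(ready)
}
```

A poller like the one in the log would call such a check on each response and sleep roughly 500ms between attempts until it returns true or a timeout elapses.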
	I0328 01:32:34.879948    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:32:34.880090    6044 round_trippers.go:469] Request Headers:
	I0328 01:32:34.880090    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:32:34.880090    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:32:34.884713    6044 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 01:32:34.884922    6044 round_trippers.go:577] Response Headers:
	I0328 01:32:34.884922    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:32:34 GMT
	I0328 01:32:34.884922    6044 round_trippers.go:580]     Audit-Id: e9c1b711-34ad-4e05-9cb7-dfcebc1ee3f7
	I0328 01:32:34.885005    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:32:34.885005    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:32:34.885005    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:32:34.885005    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:32:34.885145    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"1977","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5588 chars]
	I0328 01:32:35.382825    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:32:35.382825    6044 round_trippers.go:469] Request Headers:
	I0328 01:32:35.382825    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:32:35.382825    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:32:35.387405    6044 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 01:32:35.387405    6044 round_trippers.go:577] Response Headers:
	I0328 01:32:35.387405    6044 round_trippers.go:580]     Audit-Id: d7064458-c5a7-48f4-9876-3d4121f8b348
	I0328 01:32:35.387488    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:32:35.387488    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:32:35.387488    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:32:35.387488    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:32:35.387488    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:32:35 GMT
	I0328 01:32:35.387652    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"1977","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5588 chars]
	I0328 01:32:35.871622    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:32:35.871622    6044 round_trippers.go:469] Request Headers:
	I0328 01:32:35.871622    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:32:35.871622    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:32:35.882622    6044 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0328 01:32:35.883132    6044 round_trippers.go:577] Response Headers:
	I0328 01:32:35.883132    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:32:35 GMT
	I0328 01:32:35.883132    6044 round_trippers.go:580]     Audit-Id: b5139b39-185b-4b88-99c7-e36383c18949
	I0328 01:32:35.883132    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:32:35.883173    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:32:35.883173    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:32:35.883173    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:32:35.883629    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"1977","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5588 chars]
	I0328 01:32:36.372774    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:32:36.373040    6044 round_trippers.go:469] Request Headers:
	I0328 01:32:36.373040    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:32:36.373040    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:32:36.377348    6044 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 01:32:36.377348    6044 round_trippers.go:577] Response Headers:
	I0328 01:32:36.377348    6044 round_trippers.go:580]     Audit-Id: 8a9fddb2-92a2-4603-afea-98d373e119d2
	I0328 01:32:36.377348    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:32:36.377584    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:32:36.377584    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:32:36.377584    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:32:36.377584    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:32:36 GMT
	I0328 01:32:36.378042    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"1977","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5588 chars]
	I0328 01:32:36.876519    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:32:36.876587    6044 round_trippers.go:469] Request Headers:
	I0328 01:32:36.876587    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:32:36.876587    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:32:36.881370    6044 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 01:32:36.881370    6044 round_trippers.go:577] Response Headers:
	I0328 01:32:36.881998    6044 round_trippers.go:580]     Audit-Id: 8e697d4e-b706-4e07-b872-12a2b5b6b694
	I0328 01:32:36.881998    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:32:36.881998    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:32:36.881998    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:32:36.881998    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:32:36.881998    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:32:36 GMT
	I0328 01:32:36.882474    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"1977","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5588 chars]
	I0328 01:32:36.883064    6044 node_ready.go:53] node "multinode-240000" has status "Ready":"False"
	I0328 01:32:37.378313    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:32:37.378313    6044 round_trippers.go:469] Request Headers:
	I0328 01:32:37.378313    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:32:37.378313    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:32:37.382162    6044 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0328 01:32:37.382209    6044 round_trippers.go:577] Response Headers:
	I0328 01:32:37.382257    6044 round_trippers.go:580]     Audit-Id: 76b9028f-f6bb-44bb-b0ce-f48f4f692c58
	I0328 01:32:37.382257    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:32:37.382300    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:32:37.382300    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:32:37.382300    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:32:37.382341    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:32:37 GMT
	I0328 01:32:37.382341    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"1977","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5588 chars]
	I0328 01:32:37.868051    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:32:37.868051    6044 round_trippers.go:469] Request Headers:
	I0328 01:32:37.868051    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:32:37.868051    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:32:37.872686    6044 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 01:32:37.872686    6044 round_trippers.go:577] Response Headers:
	I0328 01:32:37.872686    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:32:37.872686    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:32:37.872686    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:32:37.872686    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:32:37 GMT
	I0328 01:32:37.872686    6044 round_trippers.go:580]     Audit-Id: 1953861f-42d0-409b-89ec-3afc3e2977fa
	I0328 01:32:37.872686    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:32:37.873206    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"1977","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5588 chars]
	I0328 01:32:38.373283    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:32:38.373283    6044 round_trippers.go:469] Request Headers:
	I0328 01:32:38.373283    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:32:38.373283    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:32:38.376594    6044 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0328 01:32:38.376594    6044 round_trippers.go:577] Response Headers:
	I0328 01:32:38.376594    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:32:38.376594    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:32:38.376594    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:32:38.376594    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:32:38.376594    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:32:38 GMT
	I0328 01:32:38.376594    6044 round_trippers.go:580]     Audit-Id: 42d490b1-e665-4663-99af-640412839bc9
	I0328 01:32:38.377098    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"1977","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5588 chars]
	I0328 01:32:38.878200    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:32:38.878638    6044 round_trippers.go:469] Request Headers:
	I0328 01:32:38.878638    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:32:38.878638    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:32:38.883329    6044 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 01:32:38.883329    6044 round_trippers.go:577] Response Headers:
	I0328 01:32:38.883329    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:32:38 GMT
	I0328 01:32:38.883329    6044 round_trippers.go:580]     Audit-Id: 98566b33-4d46-48a5-94ab-61953e9734ec
	I0328 01:32:38.883329    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:32:38.883329    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:32:38.883329    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:32:38.883329    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:32:38.883740    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"1977","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5588 chars]
	I0328 01:32:38.884391    6044 node_ready.go:53] node "multinode-240000" has status "Ready":"False"
	I0328 01:32:39.383533    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:32:39.383671    6044 round_trippers.go:469] Request Headers:
	I0328 01:32:39.383671    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:32:39.383671    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:32:39.387051    6044 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0328 01:32:39.388046    6044 round_trippers.go:577] Response Headers:
	I0328 01:32:39.388101    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:32:39 GMT
	I0328 01:32:39.388101    6044 round_trippers.go:580]     Audit-Id: 571fe9c1-33df-44f6-8339-c2da3ccb3632
	I0328 01:32:39.388101    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:32:39.388101    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:32:39.388101    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:32:39.388101    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:32:39.388402    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"1977","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5588 chars]
	I0328 01:32:39.870558    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:32:39.870620    6044 round_trippers.go:469] Request Headers:
	I0328 01:32:39.870678    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:32:39.870678    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:32:39.874490    6044 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0328 01:32:39.874490    6044 round_trippers.go:577] Response Headers:
	I0328 01:32:39.874490    6044 round_trippers.go:580]     Audit-Id: fffd0e0d-0a60-48e3-9a07-9e1aea3bf9e3
	I0328 01:32:39.874490    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:32:39.874708    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:32:39.874708    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:32:39.874708    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:32:39.874708    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:32:39 GMT
	I0328 01:32:39.874772    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"1977","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5588 chars]
	I0328 01:32:40.371454    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:32:40.371522    6044 round_trippers.go:469] Request Headers:
	I0328 01:32:40.371522    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:32:40.371522    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:32:40.376314    6044 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 01:32:40.376878    6044 round_trippers.go:577] Response Headers:
	I0328 01:32:40.376878    6044 round_trippers.go:580]     Audit-Id: 18e36c87-19fd-49ab-b28c-3bea3aa72554
	I0328 01:32:40.376878    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:32:40.376878    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:32:40.376878    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:32:40.376878    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:32:40.376878    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:32:40 GMT
	I0328 01:32:40.377116    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"1977","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5588 chars]
	I0328 01:32:40.873252    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:32:40.873318    6044 round_trippers.go:469] Request Headers:
	I0328 01:32:40.873318    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:32:40.873318    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:32:40.877561    6044 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 01:32:40.877561    6044 round_trippers.go:577] Response Headers:
	I0328 01:32:40.877639    6044 round_trippers.go:580]     Audit-Id: 0a0cdc1a-77fe-431c-bafe-0ec33478c4f7
	I0328 01:32:40.877639    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:32:40.877639    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:32:40.877639    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:32:40.877639    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:32:40.877639    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:32:40 GMT
	I0328 01:32:40.878032    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"1977","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5588 chars]
	I0328 01:32:41.378708    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:32:41.378780    6044 round_trippers.go:469] Request Headers:
	I0328 01:32:41.378780    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:32:41.378780    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:32:41.382255    6044 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0328 01:32:41.383176    6044 round_trippers.go:577] Response Headers:
	I0328 01:32:41.383176    6044 round_trippers.go:580]     Audit-Id: 0e606d26-34d0-4b0e-9cca-e05fbfe8de63
	I0328 01:32:41.383176    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:32:41.383176    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:32:41.383176    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:32:41.383176    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:32:41.383176    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:32:41 GMT
	I0328 01:32:41.383353    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"1977","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5588 chars]
	I0328 01:32:41.383981    6044 node_ready.go:53] node "multinode-240000" has status "Ready":"False"
	I0328 01:32:41.884237    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:32:41.884294    6044 round_trippers.go:469] Request Headers:
	I0328 01:32:41.884294    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:32:41.884294    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:32:41.888886    6044 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 01:32:41.889104    6044 round_trippers.go:577] Response Headers:
	I0328 01:32:41.889104    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:32:41.889104    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:32:41.889104    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:32:41 GMT
	I0328 01:32:41.889104    6044 round_trippers.go:580]     Audit-Id: f12be6b1-c765-41c9-9ceb-c500995e76fa
	I0328 01:32:41.889104    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:32:41.889104    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:32:41.889104    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"1977","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5588 chars]
	I0328 01:32:42.382467    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:32:42.382467    6044 round_trippers.go:469] Request Headers:
	I0328 01:32:42.382467    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:32:42.382467    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:32:42.385434    6044 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0328 01:32:42.386424    6044 round_trippers.go:577] Response Headers:
	I0328 01:32:42.386482    6044 round_trippers.go:580]     Audit-Id: 5ffade95-60f5-4b41-85c9-e876d8b7089c
	I0328 01:32:42.386482    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:32:42.386482    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:32:42.386482    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:32:42.386482    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:32:42.386482    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:32:42 GMT
	I0328 01:32:42.386793    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"1977","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5588 chars]
	I0328 01:32:42.869430    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:32:42.869691    6044 round_trippers.go:469] Request Headers:
	I0328 01:32:42.869691    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:32:42.869691    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:32:42.873148    6044 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0328 01:32:42.873745    6044 round_trippers.go:577] Response Headers:
	I0328 01:32:42.873745    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:32:42.873745    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:32:42.873745    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:32:42.873745    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:32:42 GMT
	I0328 01:32:42.873848    6044 round_trippers.go:580]     Audit-Id: d285592f-f933-4d0c-a103-14d83fe62b8c
	I0328 01:32:42.873848    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:32:42.874137    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"1977","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5588 chars]
	I0328 01:32:43.377922    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:32:43.378030    6044 round_trippers.go:469] Request Headers:
	I0328 01:32:43.378030    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:32:43.378102    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:32:43.382005    6044 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0328 01:32:43.382124    6044 round_trippers.go:577] Response Headers:
	I0328 01:32:43.382124    6044 round_trippers.go:580]     Audit-Id: df9b6025-5242-4520-933a-db4697a21b99
	I0328 01:32:43.382124    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:32:43.382124    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:32:43.382124    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:32:43.382124    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:32:43.382124    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:32:43 GMT
	I0328 01:32:43.382124    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"1977","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5588 chars]
	I0328 01:32:43.877638    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:32:43.877748    6044 round_trippers.go:469] Request Headers:
	I0328 01:32:43.877748    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:32:43.877748    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:32:43.881187    6044 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0328 01:32:43.882174    6044 round_trippers.go:577] Response Headers:
	I0328 01:32:43.882174    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:32:43.882174    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:32:43.882174    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:32:43.882174    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:32:43 GMT
	I0328 01:32:43.882174    6044 round_trippers.go:580]     Audit-Id: 04a2bd53-b025-429e-b1f8-242bc9f4680d
	I0328 01:32:43.882174    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:32:43.882803    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"1977","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5588 chars]
	I0328 01:32:43.883383    6044 node_ready.go:53] node "multinode-240000" has status "Ready":"False"
	I0328 01:32:44.383219    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:32:44.383219    6044 round_trippers.go:469] Request Headers:
	I0328 01:32:44.383219    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:32:44.383219    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:32:44.386789    6044 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0328 01:32:44.387069    6044 round_trippers.go:577] Response Headers:
	I0328 01:32:44.387069    6044 round_trippers.go:580]     Audit-Id: 10372e68-b64b-46b9-a463-eefda2b18076
	I0328 01:32:44.387069    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:32:44.387069    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:32:44.387069    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:32:44.387138    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:32:44.387138    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:32:44 GMT
	I0328 01:32:44.387363    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"1977","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5588 chars]
	I0328 01:32:44.870705    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:32:44.870834    6044 round_trippers.go:469] Request Headers:
	I0328 01:32:44.870834    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:32:44.870894    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:32:44.874292    6044 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0328 01:32:44.874292    6044 round_trippers.go:577] Response Headers:
	I0328 01:32:44.874292    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:32:44.874292    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:32:44.874292    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:32:44.874292    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:32:44 GMT
	I0328 01:32:44.874292    6044 round_trippers.go:580]     Audit-Id: a995bb09-6e51-4e67-bc9e-ff3d7e396912
	I0328 01:32:44.874292    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:32:44.874659    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"1977","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5588 chars]
	I0328 01:32:45.378679    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:32:45.378679    6044 round_trippers.go:469] Request Headers:
	I0328 01:32:45.378914    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:32:45.378914    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:32:45.389443    6044 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0328 01:32:45.389443    6044 round_trippers.go:577] Response Headers:
	I0328 01:32:45.389443    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:32:45.389443    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:32:45.389443    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:32:45.389799    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:32:45 GMT
	I0328 01:32:45.389799    6044 round_trippers.go:580]     Audit-Id: df2db671-e9c5-43f4-8fae-58cee154b3fe
	I0328 01:32:45.389799    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:32:45.390238    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"1977","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5588 chars]
	I0328 01:32:45.868672    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:32:45.868751    6044 round_trippers.go:469] Request Headers:
	I0328 01:32:45.868751    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:32:45.868751    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:32:45.873475    6044 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 01:32:45.873475    6044 round_trippers.go:577] Response Headers:
	I0328 01:32:45.873475    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:32:45 GMT
	I0328 01:32:45.873475    6044 round_trippers.go:580]     Audit-Id: e472d716-3677-4946-8e44-1747db6d252a
	I0328 01:32:45.873475    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:32:45.873475    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:32:45.873475    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:32:45.873475    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:32:45.873475    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"1977","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5588 chars]
	I0328 01:32:46.373783    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:32:46.373860    6044 round_trippers.go:469] Request Headers:
	I0328 01:32:46.373860    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:32:46.373912    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:32:46.378308    6044 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 01:32:46.378308    6044 round_trippers.go:577] Response Headers:
	I0328 01:32:46.378308    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:32:46 GMT
	I0328 01:32:46.378308    6044 round_trippers.go:580]     Audit-Id: 4d6d6f86-9b78-410c-9e62-342655933c52
	I0328 01:32:46.378308    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:32:46.378308    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:32:46.378308    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:32:46.378308    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:32:46.378661    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"1977","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5588 chars]
	I0328 01:32:46.379123    6044 node_ready.go:53] node "multinode-240000" has status "Ready":"False"
	I0328 01:32:46.875338    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:32:46.875338    6044 round_trippers.go:469] Request Headers:
	I0328 01:32:46.875338    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:32:46.875338    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:32:46.879919    6044 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 01:32:46.879919    6044 round_trippers.go:577] Response Headers:
	I0328 01:32:46.879919    6044 round_trippers.go:580]     Audit-Id: c1ce1192-bbcf-4e95-a7de-1c4e87a323df
	I0328 01:32:46.879919    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:32:46.879919    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:32:46.879919    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:32:46.879919    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:32:46.880089    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:32:46 GMT
	I0328 01:32:46.880237    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"1977","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5588 chars]
	I0328 01:32:47.379030    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:32:47.379030    6044 round_trippers.go:469] Request Headers:
	I0328 01:32:47.379030    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:32:47.379030    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:32:47.383231    6044 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 01:32:47.383319    6044 round_trippers.go:577] Response Headers:
	I0328 01:32:47.383319    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:32:47.383319    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:32:47 GMT
	I0328 01:32:47.383319    6044 round_trippers.go:580]     Audit-Id: 2279d9f0-92ea-4ff3-b350-b8882bce703a
	I0328 01:32:47.383319    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:32:47.383319    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:32:47.383319    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:32:47.383694    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"1977","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5588 chars]
	I0328 01:32:47.878074    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:32:47.878074    6044 round_trippers.go:469] Request Headers:
	I0328 01:32:47.878074    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:32:47.878365    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:32:47.881658    6044 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0328 01:32:47.881658    6044 round_trippers.go:577] Response Headers:
	I0328 01:32:47.881658    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:32:47.881658    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:32:47 GMT
	I0328 01:32:47.882461    6044 round_trippers.go:580]     Audit-Id: 6395c584-704f-4004-a39f-4bc22d258ffa
	I0328 01:32:47.882461    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:32:47.882461    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:32:47.882461    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:32:47.882790    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"1977","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5588 chars]
	I0328 01:32:48.381990    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:32:48.381990    6044 round_trippers.go:469] Request Headers:
	I0328 01:32:48.381990    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:32:48.381990    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:32:48.386494    6044 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 01:32:48.386494    6044 round_trippers.go:577] Response Headers:
	I0328 01:32:48.386494    6044 round_trippers.go:580]     Audit-Id: d6b0ec0a-334c-4b41-a38a-080f47b44eb8
	I0328 01:32:48.386494    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:32:48.386494    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:32:48.386494    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:32:48.386494    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:32:48.386494    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:32:48 GMT
	I0328 01:32:48.387199    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"1977","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5588 chars]
	I0328 01:32:48.387730    6044 node_ready.go:53] node "multinode-240000" has status "Ready":"False"
	I0328 01:32:48.872803    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:32:48.872803    6044 round_trippers.go:469] Request Headers:
	I0328 01:32:48.872803    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:32:48.872803    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:32:48.877041    6044 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 01:32:48.877041    6044 round_trippers.go:577] Response Headers:
	I0328 01:32:48.877041    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:32:48.877041    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:32:48 GMT
	I0328 01:32:48.877041    6044 round_trippers.go:580]     Audit-Id: c5c12cd2-83af-43fd-80e0-6f0c4e9d9899
	I0328 01:32:48.877041    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:32:48.877261    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:32:48.877261    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:32:48.877365    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"1977","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5588 chars]
	I0328 01:32:49.375326    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:32:49.375386    6044 round_trippers.go:469] Request Headers:
	I0328 01:32:49.375386    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:32:49.375452    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:32:49.383234    6044 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0328 01:32:49.383234    6044 round_trippers.go:577] Response Headers:
	I0328 01:32:49.383234    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:32:49 GMT
	I0328 01:32:49.383234    6044 round_trippers.go:580]     Audit-Id: e00020e8-99ea-467c-a342-259bfd21722f
	I0328 01:32:49.383234    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:32:49.383234    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:32:49.383234    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:32:49.383234    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:32:49.383590    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"1977","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5588 chars]
	I0328 01:32:49.876823    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:32:49.876823    6044 round_trippers.go:469] Request Headers:
	I0328 01:32:49.876823    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:32:49.876823    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:32:49.883192    6044 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0328 01:32:49.883381    6044 round_trippers.go:577] Response Headers:
	I0328 01:32:49.883381    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:32:49.883381    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:32:49.883381    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:32:49 GMT
	I0328 01:32:49.883381    6044 round_trippers.go:580]     Audit-Id: 4ec65566-e313-4230-abf5-430325415f15
	I0328 01:32:49.883381    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:32:49.883381    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:32:49.884220    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"1977","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5588 chars]
	I0328 01:32:50.369462    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:32:50.369462    6044 round_trippers.go:469] Request Headers:
	I0328 01:32:50.369462    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:32:50.369462    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:32:50.373894    6044 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 01:32:50.373894    6044 round_trippers.go:577] Response Headers:
	I0328 01:32:50.374489    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:32:50.374489    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:32:50.374489    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:32:50 GMT
	I0328 01:32:50.374489    6044 round_trippers.go:580]     Audit-Id: d8731f61-da51-4051-90ba-561479eb7934
	I0328 01:32:50.374489    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:32:50.374489    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:32:50.374788    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"1977","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5588 chars]
	I0328 01:32:50.879092    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:32:50.879185    6044 round_trippers.go:469] Request Headers:
	I0328 01:32:50.879185    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:32:50.879185    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:32:50.885712    6044 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0328 01:32:50.885712    6044 round_trippers.go:577] Response Headers:
	I0328 01:32:50.885712    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:32:50.885712    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:32:50.885712    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:32:50.885712    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:32:50 GMT
	I0328 01:32:50.885712    6044 round_trippers.go:580]     Audit-Id: 266d4fe9-f340-4dac-90e1-346b7a3a500b
	I0328 01:32:50.885712    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:32:50.886092    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"1977","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5588 chars]
	I0328 01:32:50.886834    6044 node_ready.go:53] node "multinode-240000" has status "Ready":"False"
	I0328 01:32:51.380606    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:32:51.380606    6044 round_trippers.go:469] Request Headers:
	I0328 01:32:51.380606    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:32:51.380872    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:32:51.390533    6044 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0328 01:32:51.390533    6044 round_trippers.go:577] Response Headers:
	I0328 01:32:51.390533    6044 round_trippers.go:580]     Audit-Id: ea0301aa-f324-4b13-b581-aa01ca97daf2
	I0328 01:32:51.390533    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:32:51.390533    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:32:51.390533    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:32:51.390533    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:32:51.390533    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:32:51 GMT
	I0328 01:32:51.390533    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"1977","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5588 chars]
	I0328 01:32:51.868835    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:32:51.868835    6044 round_trippers.go:469] Request Headers:
	I0328 01:32:51.868835    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:32:51.868835    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:32:51.874115    6044 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 01:32:51.874199    6044 round_trippers.go:577] Response Headers:
	I0328 01:32:51.874199    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:32:51.874199    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:32:51 GMT
	I0328 01:32:51.874199    6044 round_trippers.go:580]     Audit-Id: a985b2be-e4a5-4a37-aea5-feec7817ef98
	I0328 01:32:51.874199    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:32:51.874199    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:32:51.874199    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:32:51.874199    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"1977","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5588 chars]
	I0328 01:32:52.370755    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:32:52.370755    6044 round_trippers.go:469] Request Headers:
	I0328 01:32:52.370755    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:32:52.370755    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:32:52.375478    6044 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 01:32:52.376178    6044 round_trippers.go:577] Response Headers:
	I0328 01:32:52.376178    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:32:52.376178    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:32:52.376178    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:32:52 GMT
	I0328 01:32:52.376178    6044 round_trippers.go:580]     Audit-Id: fad1cb7f-0425-4a93-819a-1945a6d6b3c2
	I0328 01:32:52.376178    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:32:52.376178    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:32:52.376527    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"1977","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5588 chars]
	I0328 01:32:52.876133    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:32:52.876209    6044 round_trippers.go:469] Request Headers:
	I0328 01:32:52.876209    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:32:52.876209    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:32:52.884190    6044 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0328 01:32:52.885123    6044 round_trippers.go:577] Response Headers:
	I0328 01:32:52.885123    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:32:52.885123    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:32:52 GMT
	I0328 01:32:52.885123    6044 round_trippers.go:580]     Audit-Id: 944b86ee-6fe4-429a-a4b9-164efd33b768
	I0328 01:32:52.885123    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:32:52.885123    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:32:52.885123    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:32:52.886158    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"1977","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5588 chars]
	I0328 01:32:53.377575    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:32:53.377636    6044 round_trippers.go:469] Request Headers:
	I0328 01:32:53.377636    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:32:53.377636    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:32:53.381680    6044 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 01:32:53.381680    6044 round_trippers.go:577] Response Headers:
	I0328 01:32:53.382093    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:32:53.382093    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:32:53.382093    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:32:53.382093    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:32:53.382093    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:32:53 GMT
	I0328 01:32:53.382093    6044 round_trippers.go:580]     Audit-Id: 35749f5f-2656-424c-9e85-54e3aeca7405
	I0328 01:32:53.382363    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"1977","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5588 chars]
	I0328 01:32:53.383136    6044 node_ready.go:53] node "multinode-240000" has status "Ready":"False"
	I0328 01:32:53.881296    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:32:53.881423    6044 round_trippers.go:469] Request Headers:
	I0328 01:32:53.881423    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:32:53.881423    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:32:53.892464    6044 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0328 01:32:53.892588    6044 round_trippers.go:577] Response Headers:
	I0328 01:32:53.892588    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:32:53.892588    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:32:53 GMT
	I0328 01:32:53.892588    6044 round_trippers.go:580]     Audit-Id: f2158fef-8ec5-43ee-b6cd-fd7efe401602
	I0328 01:32:53.892588    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:32:53.892588    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:32:53.892588    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:32:53.892588    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"1977","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5588 chars]
	I0328 01:32:54.370866    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:32:54.370866    6044 round_trippers.go:469] Request Headers:
	I0328 01:32:54.370866    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:32:54.370866    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:32:54.377336    6044 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0328 01:32:54.377336    6044 round_trippers.go:577] Response Headers:
	I0328 01:32:54.377452    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:32:54.377452    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:32:54.377452    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:32:54 GMT
	I0328 01:32:54.377452    6044 round_trippers.go:580]     Audit-Id: e8a80d99-932f-4b65-aa10-5da7a8d297e5
	I0328 01:32:54.377452    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:32:54.377452    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:32:54.377658    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"2021","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5365 chars]
	I0328 01:32:54.378365    6044 node_ready.go:49] node "multinode-240000" has status "Ready":"True"
	I0328 01:32:54.378483    6044 node_ready.go:38] duration metric: took 29.0104766s for node "multinode-240000" to be "Ready" ...
	I0328 01:32:54.378542    6044 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0328 01:32:54.378610    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/namespaces/kube-system/pods
	I0328 01:32:54.378685    6044 round_trippers.go:469] Request Headers:
	I0328 01:32:54.378685    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:32:54.378737    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:32:54.384859    6044 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0328 01:32:54.384859    6044 round_trippers.go:577] Response Headers:
	I0328 01:32:54.384859    6044 round_trippers.go:580]     Audit-Id: 19f173d9-ef52-48a3-b9cc-4dffcab52055
	I0328 01:32:54.385857    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:32:54.385857    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:32:54.385880    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:32:54.385880    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:32:54.385880    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:32:54 GMT
	I0328 01:32:54.387581    6044 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"2021"},"items":[{"metadata":{"name":"coredns-76f75df574-776ph","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"dc1416cc-736d-4eab-b95d-e963572b78e3","resourceVersion":"1866","creationTimestamp":"2024-03-28T01:07:44Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"840f6c47-adf3-4c08-8b73-04e55f98f236","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:07:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"840f6c47-adf3-4c08-8b73-04e55f98f236\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 86583 chars]
	I0328 01:32:54.391748    6044 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-776ph" in "kube-system" namespace to be "Ready" ...
	I0328 01:32:54.391748    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-776ph
	I0328 01:32:54.391748    6044 round_trippers.go:469] Request Headers:
	I0328 01:32:54.391748    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:32:54.391748    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:32:54.395448    6044 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0328 01:32:54.395472    6044 round_trippers.go:577] Response Headers:
	I0328 01:32:54.395472    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:32:54.395472    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:32:54 GMT
	I0328 01:32:54.395472    6044 round_trippers.go:580]     Audit-Id: 3725b522-3420-476f-a55c-b4d7982bcc4c
	I0328 01:32:54.395472    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:32:54.395472    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:32:54.395472    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:32:54.396576    6044 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-76f75df574-776ph","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"dc1416cc-736d-4eab-b95d-e963572b78e3","resourceVersion":"1866","creationTimestamp":"2024-03-28T01:07:44Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"840f6c47-adf3-4c08-8b73-04e55f98f236","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:07:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"840f6c47-adf3-4c08-8b73-04e55f98f236\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0328 01:32:54.397141    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:32:54.397202    6044 round_trippers.go:469] Request Headers:
	I0328 01:32:54.397202    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:32:54.397202    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:32:54.399419    6044 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0328 01:32:54.399419    6044 round_trippers.go:577] Response Headers:
	I0328 01:32:54.399419    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:32:54.399419    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:32:54.399419    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:32:54.399419    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:32:54.399419    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:32:54 GMT
	I0328 01:32:54.399419    6044 round_trippers.go:580]     Audit-Id: edd68f83-b434-48be-af9a-ecd6bf60b240
	I0328 01:32:54.400659    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"2021","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5365 chars]
	I0328 01:32:54.906747    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-776ph
	I0328 01:32:54.906911    6044 round_trippers.go:469] Request Headers:
	I0328 01:32:54.906974    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:32:54.906974    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:32:54.911719    6044 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 01:32:54.912142    6044 round_trippers.go:577] Response Headers:
	I0328 01:32:54.912142    6044 round_trippers.go:580]     Audit-Id: 1e7095a2-9b3b-4774-84ce-e770efebd411
	I0328 01:32:54.912142    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:32:54.912142    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:32:54.912142    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:32:54.912224    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:32:54.912224    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:32:54 GMT
	I0328 01:32:54.912450    6044 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-76f75df574-776ph","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"dc1416cc-736d-4eab-b95d-e963572b78e3","resourceVersion":"1866","creationTimestamp":"2024-03-28T01:07:44Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"840f6c47-adf3-4c08-8b73-04e55f98f236","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:07:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"840f6c47-adf3-4c08-8b73-04e55f98f236\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0328 01:32:54.913169    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:32:54.913245    6044 round_trippers.go:469] Request Headers:
	I0328 01:32:54.913245    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:32:54.913245    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:32:54.917492    6044 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 01:32:54.917492    6044 round_trippers.go:577] Response Headers:
	I0328 01:32:54.917692    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:32:54.917692    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:32:54 GMT
	I0328 01:32:54.917692    6044 round_trippers.go:580]     Audit-Id: ce1493d1-2898-48ae-be9a-69ddca902283
	I0328 01:32:54.917692    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:32:54.917692    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:32:54.917692    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:32:54.917982    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"2021","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5365 chars]
	I0328 01:32:55.403614    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-776ph
	I0328 01:32:55.403614    6044 round_trippers.go:469] Request Headers:
	I0328 01:32:55.403614    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:32:55.403614    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:32:55.408143    6044 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 01:32:55.408377    6044 round_trippers.go:577] Response Headers:
	I0328 01:32:55.408377    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:32:55 GMT
	I0328 01:32:55.408377    6044 round_trippers.go:580]     Audit-Id: ab4ce259-c92b-4b83-afcf-b210dfc6a8f0
	I0328 01:32:55.408377    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:32:55.408377    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:32:55.408377    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:32:55.408377    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:32:55.409051    6044 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-76f75df574-776ph","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"dc1416cc-736d-4eab-b95d-e963572b78e3","resourceVersion":"1866","creationTimestamp":"2024-03-28T01:07:44Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"840f6c47-adf3-4c08-8b73-04e55f98f236","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:07:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"840f6c47-adf3-4c08-8b73-04e55f98f236\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0328 01:32:55.409276    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:32:55.409276    6044 round_trippers.go:469] Request Headers:
	I0328 01:32:55.409807    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:32:55.409807    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:32:55.412527    6044 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0328 01:32:55.413523    6044 round_trippers.go:577] Response Headers:
	I0328 01:32:55.413523    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:32:55.413523    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:32:55.413523    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:32:55.413523    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:32:55 GMT
	I0328 01:32:55.413523    6044 round_trippers.go:580]     Audit-Id: 059bdf85-ec7f-4459-a750-3cb99cefc952
	I0328 01:32:55.413523    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:32:55.414274    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"2021","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5365 chars]
	I0328 01:32:55.904672    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-776ph
	I0328 01:32:55.904672    6044 round_trippers.go:469] Request Headers:
	I0328 01:32:55.904672    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:32:55.904672    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:32:55.912964    6044 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0328 01:32:55.912964    6044 round_trippers.go:577] Response Headers:
	I0328 01:32:55.912964    6044 round_trippers.go:580]     Audit-Id: 6c6b60a2-e463-4789-be2a-feba8b1868db
	I0328 01:32:55.912964    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:32:55.912964    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:32:55.912964    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:32:55.912964    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:32:55.912964    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:32:55 GMT
	I0328 01:32:55.914035    6044 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-76f75df574-776ph","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"dc1416cc-736d-4eab-b95d-e963572b78e3","resourceVersion":"1866","creationTimestamp":"2024-03-28T01:07:44Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"840f6c47-adf3-4c08-8b73-04e55f98f236","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:07:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"840f6c47-adf3-4c08-8b73-04e55f98f236\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0328 01:32:55.914735    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:32:55.914735    6044 round_trippers.go:469] Request Headers:
	I0328 01:32:55.914735    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:32:55.914735    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:32:55.918357    6044 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0328 01:32:55.918357    6044 round_trippers.go:577] Response Headers:
	I0328 01:32:55.918357    6044 round_trippers.go:580]     Audit-Id: 879bab20-7722-46cf-af8c-ce75fd3cb367
	I0328 01:32:55.918357    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:32:55.918357    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:32:55.918357    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:32:55.918357    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:32:55.918357    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:32:55 GMT
	I0328 01:32:55.918357    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"2021","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5365 chars]
	I0328 01:32:56.406797    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-776ph
	I0328 01:32:56.407056    6044 round_trippers.go:469] Request Headers:
	I0328 01:32:56.407056    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:32:56.407056    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:32:56.411203    6044 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 01:32:56.411714    6044 round_trippers.go:577] Response Headers:
	I0328 01:32:56.411714    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:32:56.411714    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:32:56.411714    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:32:56.411714    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:32:56 GMT
	I0328 01:32:56.411714    6044 round_trippers.go:580]     Audit-Id: d5f92fa5-70f5-4a26-881c-10fdf512c27d
	I0328 01:32:56.411797    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:32:56.411995    6044 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-76f75df574-776ph","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"dc1416cc-736d-4eab-b95d-e963572b78e3","resourceVersion":"1866","creationTimestamp":"2024-03-28T01:07:44Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"840f6c47-adf3-4c08-8b73-04e55f98f236","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:07:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"840f6c47-adf3-4c08-8b73-04e55f98f236\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0328 01:32:56.413361    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:32:56.413433    6044 round_trippers.go:469] Request Headers:
	I0328 01:32:56.413433    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:32:56.413433    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:32:56.416684    6044 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0328 01:32:56.416684    6044 round_trippers.go:577] Response Headers:
	I0328 01:32:56.416913    6044 round_trippers.go:580]     Audit-Id: 2f11617b-d64e-457b-8ffe-8d453e97c402
	I0328 01:32:56.416913    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:32:56.416913    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:32:56.416913    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:32:56.416913    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:32:56.416913    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:32:56 GMT
	I0328 01:32:56.417108    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"2021","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5365 chars]
	I0328 01:32:56.417948    6044 pod_ready.go:102] pod "coredns-76f75df574-776ph" in "kube-system" namespace has status "Ready":"False"
	I0328 01:32:56.898983    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-776ph
	I0328 01:32:56.898983    6044 round_trippers.go:469] Request Headers:
	I0328 01:32:56.898983    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:32:56.898983    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:32:56.902689    6044 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0328 01:32:56.902689    6044 round_trippers.go:577] Response Headers:
	I0328 01:32:56.902689    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:32:56.902689    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:32:56.902689    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:32:56.902689    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:32:56.902689    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:32:56 GMT
	I0328 01:32:56.902689    6044 round_trippers.go:580]     Audit-Id: 41d6fdc1-d5e3-4b76-8b9e-a50b670e6123
	I0328 01:32:56.903822    6044 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-76f75df574-776ph","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"dc1416cc-736d-4eab-b95d-e963572b78e3","resourceVersion":"1866","creationTimestamp":"2024-03-28T01:07:44Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"840f6c47-adf3-4c08-8b73-04e55f98f236","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:07:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"840f6c47-adf3-4c08-8b73-04e55f98f236\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0328 01:32:56.904646    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:32:56.904706    6044 round_trippers.go:469] Request Headers:
	I0328 01:32:56.904706    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:32:56.904763    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:32:56.908598    6044 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0328 01:32:56.908598    6044 round_trippers.go:577] Response Headers:
	I0328 01:32:56.908598    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:32:56.908598    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:32:56.908598    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:32:56 GMT
	I0328 01:32:56.908598    6044 round_trippers.go:580]     Audit-Id: c4961037-0245-45ed-a04b-9fac0c93a93c
	I0328 01:32:56.908598    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:32:56.908598    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:32:56.909677    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"2021","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5365 chars]
	I0328 01:32:57.399559    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-776ph
	I0328 01:32:57.399559    6044 round_trippers.go:469] Request Headers:
	I0328 01:32:57.399559    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:32:57.399559    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:32:57.404352    6044 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 01:32:57.404352    6044 round_trippers.go:577] Response Headers:
	I0328 01:32:57.404352    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:32:57.404520    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:32:57 GMT
	I0328 01:32:57.404520    6044 round_trippers.go:580]     Audit-Id: cf7c1809-fc3d-47db-8ce2-5272a2a7c5ce
	I0328 01:32:57.404520    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:32:57.404520    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:32:57.404520    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:32:57.405181    6044 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-76f75df574-776ph","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"dc1416cc-736d-4eab-b95d-e963572b78e3","resourceVersion":"1866","creationTimestamp":"2024-03-28T01:07:44Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"840f6c47-adf3-4c08-8b73-04e55f98f236","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:07:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"840f6c47-adf3-4c08-8b73-04e55f98f236\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0328 01:32:57.405900    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:32:57.405900    6044 round_trippers.go:469] Request Headers:
	I0328 01:32:57.405981    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:32:57.405981    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:32:57.409899    6044 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0328 01:32:57.410016    6044 round_trippers.go:577] Response Headers:
	I0328 01:32:57.410016    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:32:57.410016    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:32:57.410016    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:32:57.410016    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:32:57 GMT
	I0328 01:32:57.410084    6044 round_trippers.go:580]     Audit-Id: b03a9b6d-bd67-4b17-8efd-9b3b455a1572
	I0328 01:32:57.410084    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:32:57.410670    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"2021","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5365 chars]
	I0328 01:32:57.897474    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-776ph
	I0328 01:32:57.897474    6044 round_trippers.go:469] Request Headers:
	I0328 01:32:57.897474    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:32:57.897474    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:32:57.903263    6044 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 01:32:57.903263    6044 round_trippers.go:577] Response Headers:
	I0328 01:32:57.903538    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:32:57.903538    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:32:57 GMT
	I0328 01:32:57.903538    6044 round_trippers.go:580]     Audit-Id: 748225c4-3c08-4936-9e12-175c065f3d2e
	I0328 01:32:57.903538    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:32:57.903538    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:32:57.903538    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:32:57.903538    6044 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-76f75df574-776ph","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"dc1416cc-736d-4eab-b95d-e963572b78e3","resourceVersion":"1866","creationTimestamp":"2024-03-28T01:07:44Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"840f6c47-adf3-4c08-8b73-04e55f98f236","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:07:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"840f6c47-adf3-4c08-8b73-04e55f98f236\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0328 01:32:57.904640    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:32:57.904640    6044 round_trippers.go:469] Request Headers:
	I0328 01:32:57.904640    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:32:57.904640    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:32:57.908150    6044 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0328 01:32:57.908150    6044 round_trippers.go:577] Response Headers:
	I0328 01:32:57.908150    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:32:57.908150    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:32:57.908150    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:32:57.908150    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:32:57 GMT
	I0328 01:32:57.908150    6044 round_trippers.go:580]     Audit-Id: 2c3ec668-b3e8-4c8b-9c9c-5b3a1735b5d1
	I0328 01:32:57.908150    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:32:57.908706    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"2022","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5245 chars]
	I0328 01:32:58.396873    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-776ph
	I0328 01:32:58.396873    6044 round_trippers.go:469] Request Headers:
	I0328 01:32:58.396873    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:32:58.397004    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:32:58.403321    6044 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0328 01:32:58.403321    6044 round_trippers.go:577] Response Headers:
	I0328 01:32:58.403868    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:32:58.403868    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:32:58.403868    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:32:58.403868    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:32:58.403868    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:32:58 GMT
	I0328 01:32:58.403868    6044 round_trippers.go:580]     Audit-Id: 0a72cd07-ff3e-4f68-bc87-9c7335ffa3e2
	I0328 01:32:58.404564    6044 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-76f75df574-776ph","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"dc1416cc-736d-4eab-b95d-e963572b78e3","resourceVersion":"1866","creationTimestamp":"2024-03-28T01:07:44Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"840f6c47-adf3-4c08-8b73-04e55f98f236","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:07:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"840f6c47-adf3-4c08-8b73-04e55f98f236\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0328 01:32:58.404790    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:32:58.404790    6044 round_trippers.go:469] Request Headers:
	I0328 01:32:58.404790    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:32:58.404790    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:32:58.412933    6044 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0328 01:32:58.412990    6044 round_trippers.go:577] Response Headers:
	I0328 01:32:58.412990    6044 round_trippers.go:580]     Audit-Id: 1212ce18-dfad-4550-8df5-35ae43af75e6
	I0328 01:32:58.413056    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:32:58.413056    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:32:58.413056    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:32:58.413113    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:32:58.413113    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:32:58 GMT
	I0328 01:32:58.413610    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"2022","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5245 chars]
	I0328 01:32:58.899376    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-776ph
	I0328 01:32:58.899470    6044 round_trippers.go:469] Request Headers:
	I0328 01:32:58.899470    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:32:58.899470    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:32:58.907264    6044 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0328 01:32:58.907986    6044 round_trippers.go:577] Response Headers:
	I0328 01:32:58.907986    6044 round_trippers.go:580]     Audit-Id: 9f3c64c9-ac43-46f0-8649-4c89cd65f0f4
	I0328 01:32:58.907986    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:32:58.908030    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:32:58.908030    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:32:58.908030    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:32:58.908030    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:32:58 GMT
	I0328 01:32:58.908329    6044 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-76f75df574-776ph","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"dc1416cc-736d-4eab-b95d-e963572b78e3","resourceVersion":"1866","creationTimestamp":"2024-03-28T01:07:44Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"840f6c47-adf3-4c08-8b73-04e55f98f236","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:07:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"840f6c47-adf3-4c08-8b73-04e55f98f236\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0328 01:32:58.909219    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:32:58.909219    6044 round_trippers.go:469] Request Headers:
	I0328 01:32:58.909219    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:32:58.909219    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:32:58.912323    6044 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0328 01:32:58.912323    6044 round_trippers.go:577] Response Headers:
	I0328 01:32:58.912323    6044 round_trippers.go:580]     Audit-Id: 8f88c7ab-2f34-48d1-820e-f358ede78d3c
	I0328 01:32:58.912323    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:32:58.912323    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:32:58.912323    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:32:58.912323    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:32:58.912323    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:32:58 GMT
	I0328 01:32:58.913352    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"2022","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5245 chars]
	I0328 01:32:58.913352    6044 pod_ready.go:102] pod "coredns-76f75df574-776ph" in "kube-system" namespace has status "Ready":"False"
	I0328 01:32:59.399712    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-776ph
	I0328 01:32:59.399712    6044 round_trippers.go:469] Request Headers:
	I0328 01:32:59.399712    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:32:59.399712    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:32:59.408239    6044 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0328 01:32:59.408239    6044 round_trippers.go:577] Response Headers:
	I0328 01:32:59.408239    6044 round_trippers.go:580]     Audit-Id: 5736abbd-1de1-4609-86b4-09975187adcd
	I0328 01:32:59.408239    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:32:59.408239    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:32:59.408239    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:32:59.408239    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:32:59.408239    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:32:59 GMT
	I0328 01:32:59.408985    6044 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-76f75df574-776ph","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"dc1416cc-736d-4eab-b95d-e963572b78e3","resourceVersion":"1866","creationTimestamp":"2024-03-28T01:07:44Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"840f6c47-adf3-4c08-8b73-04e55f98f236","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:07:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"840f6c47-adf3-4c08-8b73-04e55f98f236\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0328 01:32:59.409698    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:32:59.409698    6044 round_trippers.go:469] Request Headers:
	I0328 01:32:59.409698    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:32:59.409698    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:32:59.412880    6044 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0328 01:32:59.413075    6044 round_trippers.go:577] Response Headers:
	I0328 01:32:59.413075    6044 round_trippers.go:580]     Audit-Id: 88e5acea-62da-4386-9b02-a84e57383345
	I0328 01:32:59.413075    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:32:59.413075    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:32:59.413075    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:32:59.413075    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:32:59.413075    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:32:59 GMT
	I0328 01:32:59.413337    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"2022","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5245 chars]
	I0328 01:32:59.899875    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-776ph
	I0328 01:32:59.899875    6044 round_trippers.go:469] Request Headers:
	I0328 01:32:59.899962    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:32:59.899962    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:32:59.904286    6044 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 01:32:59.904355    6044 round_trippers.go:577] Response Headers:
	I0328 01:32:59.904355    6044 round_trippers.go:580]     Audit-Id: fcf91b63-1704-4e2b-b051-8e407f3f7bbd
	I0328 01:32:59.904355    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:32:59.904355    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:32:59.904355    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:32:59.904355    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:32:59.904355    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:32:59 GMT
	I0328 01:32:59.904714    6044 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-76f75df574-776ph","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"dc1416cc-736d-4eab-b95d-e963572b78e3","resourceVersion":"1866","creationTimestamp":"2024-03-28T01:07:44Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"840f6c47-adf3-4c08-8b73-04e55f98f236","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:07:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"840f6c47-adf3-4c08-8b73-04e55f98f236\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0328 01:32:59.905638    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:32:59.905638    6044 round_trippers.go:469] Request Headers:
	I0328 01:32:59.905732    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:32:59.905732    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:32:59.912934    6044 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0328 01:32:59.912934    6044 round_trippers.go:577] Response Headers:
	I0328 01:32:59.912934    6044 round_trippers.go:580]     Audit-Id: ed41d076-b6ea-43b2-a77c-993328bbda10
	I0328 01:32:59.912934    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:32:59.912934    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:32:59.912934    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:32:59.912934    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:32:59.912934    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:32:59 GMT
	I0328 01:32:59.913329    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"2022","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5245 chars]
	I0328 01:33:00.398140    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-776ph
	I0328 01:33:00.398140    6044 round_trippers.go:469] Request Headers:
	I0328 01:33:00.398140    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:33:00.398140    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:33:00.404404    6044 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0328 01:33:00.404404    6044 round_trippers.go:577] Response Headers:
	I0328 01:33:00.404404    6044 round_trippers.go:580]     Audit-Id: a002eafa-4774-481e-9965-040115cbf507
	I0328 01:33:00.404404    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:33:00.404404    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:33:00.404404    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:33:00.404404    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:33:00.404404    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:33:00 GMT
	I0328 01:33:00.404716    6044 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-76f75df574-776ph","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"dc1416cc-736d-4eab-b95d-e963572b78e3","resourceVersion":"1866","creationTimestamp":"2024-03-28T01:07:44Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"840f6c47-adf3-4c08-8b73-04e55f98f236","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:07:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"840f6c47-adf3-4c08-8b73-04e55f98f236\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0328 01:33:00.405571    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:33:00.405629    6044 round_trippers.go:469] Request Headers:
	I0328 01:33:00.405629    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:33:00.405629    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:33:00.408500    6044 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0328 01:33:00.408500    6044 round_trippers.go:577] Response Headers:
	I0328 01:33:00.408500    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:33:00.408500    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:33:00 GMT
	I0328 01:33:00.408500    6044 round_trippers.go:580]     Audit-Id: e2474345-9ff7-46d6-845b-5362b91064f4
	I0328 01:33:00.408500    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:33:00.408500    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:33:00.408500    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:33:00.409322    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"2022","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5245 chars]
	I0328 01:33:00.893960    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-776ph
	I0328 01:33:00.893960    6044 round_trippers.go:469] Request Headers:
	I0328 01:33:00.893960    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:33:00.893960    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:33:00.896670    6044 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0328 01:33:00.896670    6044 round_trippers.go:577] Response Headers:
	I0328 01:33:00.896670    6044 round_trippers.go:580]     Audit-Id: 8bc270aa-cd40-4ba2-b444-70c542cdeccc
	I0328 01:33:00.896670    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:33:00.896670    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:33:00.896670    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:33:00.896670    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:33:00.896670    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:33:00 GMT
	I0328 01:33:00.897878    6044 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-76f75df574-776ph","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"dc1416cc-736d-4eab-b95d-e963572b78e3","resourceVersion":"1866","creationTimestamp":"2024-03-28T01:07:44Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"840f6c47-adf3-4c08-8b73-04e55f98f236","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:07:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"840f6c47-adf3-4c08-8b73-04e55f98f236\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0328 01:33:00.898163    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:33:00.898690    6044 round_trippers.go:469] Request Headers:
	I0328 01:33:00.898690    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:33:00.898690    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:33:00.905016    6044 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0328 01:33:00.905016    6044 round_trippers.go:577] Response Headers:
	I0328 01:33:00.905016    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:33:00.905016    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:33:00.905016    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:33:00.905016    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:33:00.905016    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:33:00 GMT
	I0328 01:33:00.905016    6044 round_trippers.go:580]     Audit-Id: 51621dec-19f7-4d1a-9bad-a4e49b91faef
	I0328 01:33:00.905016    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"2022","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5245 chars]
	I0328 01:33:01.406720    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-776ph
	I0328 01:33:01.406771    6044 round_trippers.go:469] Request Headers:
	I0328 01:33:01.406771    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:33:01.406771    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:33:01.411428    6044 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 01:33:01.411523    6044 round_trippers.go:577] Response Headers:
	I0328 01:33:01.411617    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:33:01.411617    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:33:01.411617    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:33:01 GMT
	I0328 01:33:01.411617    6044 round_trippers.go:580]     Audit-Id: 9fe92bca-5113-47aa-811f-96768e8454d0
	I0328 01:33:01.411673    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:33:01.411673    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:33:01.411892    6044 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-76f75df574-776ph","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"dc1416cc-736d-4eab-b95d-e963572b78e3","resourceVersion":"1866","creationTimestamp":"2024-03-28T01:07:44Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"840f6c47-adf3-4c08-8b73-04e55f98f236","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:07:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"840f6c47-adf3-4c08-8b73-04e55f98f236\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0328 01:33:01.412538    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:33:01.412625    6044 round_trippers.go:469] Request Headers:
	I0328 01:33:01.412625    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:33:01.412625    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:33:01.417300    6044 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 01:33:01.417300    6044 round_trippers.go:577] Response Headers:
	I0328 01:33:01.417300    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:33:01.417300    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:33:01.417300    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:33:01.417300    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:33:01 GMT
	I0328 01:33:01.417365    6044 round_trippers.go:580]     Audit-Id: 97ce8b72-3122-4bfc-8f34-fd1e2260a5fb
	I0328 01:33:01.417365    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:33:01.417874    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"2022","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5245 chars]
	I0328 01:33:01.418278    6044 pod_ready.go:102] pod "coredns-76f75df574-776ph" in "kube-system" namespace has status "Ready":"False"
	I0328 01:33:01.892746    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-776ph
	I0328 01:33:01.893056    6044 round_trippers.go:469] Request Headers:
	I0328 01:33:01.893056    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:33:01.893056    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:33:01.899167    6044 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0328 01:33:01.899239    6044 round_trippers.go:577] Response Headers:
	I0328 01:33:01.899239    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:33:01.899302    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:33:01.899302    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:33:01.899323    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:33:01 GMT
	I0328 01:33:01.899349    6044 round_trippers.go:580]     Audit-Id: e31782a6-5131-433f-bbab-bf66f5691ca8
	I0328 01:33:01.899349    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:33:01.900636    6044 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-76f75df574-776ph","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"dc1416cc-736d-4eab-b95d-e963572b78e3","resourceVersion":"1866","creationTimestamp":"2024-03-28T01:07:44Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"840f6c47-adf3-4c08-8b73-04e55f98f236","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:07:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"840f6c47-adf3-4c08-8b73-04e55f98f236\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0328 01:33:01.901405    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:33:01.901405    6044 round_trippers.go:469] Request Headers:
	I0328 01:33:01.901405    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:33:01.901405    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:33:01.904309    6044 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0328 01:33:01.904309    6044 round_trippers.go:577] Response Headers:
	I0328 01:33:01.904309    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:33:01.904309    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:33:01.904309    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:33:01 GMT
	I0328 01:33:01.904309    6044 round_trippers.go:580]     Audit-Id: b59c095a-a7cd-406b-831a-12ca1bb45105
	I0328 01:33:01.904309    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:33:01.904309    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:33:01.904309    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"2022","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5245 chars]
	I0328 01:33:02.396498    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-776ph
	I0328 01:33:02.396498    6044 round_trippers.go:469] Request Headers:
	I0328 01:33:02.396498    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:33:02.396498    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:33:02.405035    6044 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0328 01:33:02.405554    6044 round_trippers.go:577] Response Headers:
	I0328 01:33:02.405554    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:33:02.405554    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:33:02.405554    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:33:02.405554    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:33:02 GMT
	I0328 01:33:02.405554    6044 round_trippers.go:580]     Audit-Id: f93fe61b-90bb-4702-8c88-88562368583b
	I0328 01:33:02.405554    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:33:02.405816    6044 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-76f75df574-776ph","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"dc1416cc-736d-4eab-b95d-e963572b78e3","resourceVersion":"1866","creationTimestamp":"2024-03-28T01:07:44Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"840f6c47-adf3-4c08-8b73-04e55f98f236","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:07:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"840f6c47-adf3-4c08-8b73-04e55f98f236\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0328 01:33:02.406537    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:33:02.406634    6044 round_trippers.go:469] Request Headers:
	I0328 01:33:02.406634    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:33:02.406634    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:33:02.408926    6044 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0328 01:33:02.408926    6044 round_trippers.go:577] Response Headers:
	I0328 01:33:02.408926    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:33:02.408926    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:33:02 GMT
	I0328 01:33:02.408926    6044 round_trippers.go:580]     Audit-Id: 00c67b79-3f4b-4576-8abe-3ef5f468e504
	I0328 01:33:02.409972    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:33:02.409972    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:33:02.410001    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:33:02.410179    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"2022","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5245 chars]
	I0328 01:33:02.897825    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-776ph
	I0328 01:33:02.897825    6044 round_trippers.go:469] Request Headers:
	I0328 01:33:02.897825    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:33:02.897825    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:33:02.902940    6044 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 01:33:02.902940    6044 round_trippers.go:577] Response Headers:
	I0328 01:33:02.902940    6044 round_trippers.go:580]     Audit-Id: 26fe1be7-6bf7-47c9-86fe-e84520b8f6d6
	I0328 01:33:02.902940    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:33:02.902940    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:33:02.902940    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:33:02.902940    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:33:02.902940    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:33:02 GMT
	I0328 01:33:02.902940    6044 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-76f75df574-776ph","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"dc1416cc-736d-4eab-b95d-e963572b78e3","resourceVersion":"1866","creationTimestamp":"2024-03-28T01:07:44Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"840f6c47-adf3-4c08-8b73-04e55f98f236","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:07:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"840f6c47-adf3-4c08-8b73-04e55f98f236\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0328 01:33:02.903994    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:33:02.904077    6044 round_trippers.go:469] Request Headers:
	I0328 01:33:02.904077    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:33:02.904077    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:33:02.906553    6044 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0328 01:33:02.906553    6044 round_trippers.go:577] Response Headers:
	I0328 01:33:02.907458    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:33:02.907458    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:33:02 GMT
	I0328 01:33:02.907458    6044 round_trippers.go:580]     Audit-Id: e50af020-fadd-4a9f-a213-740fe249642d
	I0328 01:33:02.907458    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:33:02.907458    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:33:02.907458    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:33:02.907778    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"2022","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5245 chars]
	I0328 01:33:03.400938    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-776ph
	I0328 01:33:03.400938    6044 round_trippers.go:469] Request Headers:
	I0328 01:33:03.401076    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:33:03.401076    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:33:03.405050    6044 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0328 01:33:03.405484    6044 round_trippers.go:577] Response Headers:
	I0328 01:33:03.405484    6044 round_trippers.go:580]     Audit-Id: 8686030c-7e7a-4471-ab75-64385c8f9b00
	I0328 01:33:03.405557    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:33:03.405557    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:33:03.405557    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:33:03.405557    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:33:03.405557    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:33:03 GMT
	I0328 01:33:03.405925    6044 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-76f75df574-776ph","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"dc1416cc-736d-4eab-b95d-e963572b78e3","resourceVersion":"1866","creationTimestamp":"2024-03-28T01:07:44Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"840f6c47-adf3-4c08-8b73-04e55f98f236","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:07:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"840f6c47-adf3-4c08-8b73-04e55f98f236\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0328 01:33:03.406655    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:33:03.406655    6044 round_trippers.go:469] Request Headers:
	I0328 01:33:03.406655    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:33:03.406655    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:33:03.409080    6044 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0328 01:33:03.409080    6044 round_trippers.go:577] Response Headers:
	I0328 01:33:03.410021    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:33:03.410061    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:33:03 GMT
	I0328 01:33:03.410061    6044 round_trippers.go:580]     Audit-Id: 8aab82b8-1f18-412a-9775-bad27f6ea0c0
	I0328 01:33:03.410061    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:33:03.410102    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:33:03.410102    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:33:03.410166    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"2022","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5245 chars]
	I0328 01:33:03.900777    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-776ph
	I0328 01:33:03.900890    6044 round_trippers.go:469] Request Headers:
	I0328 01:33:03.900890    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:33:03.900890    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:33:03.907185    6044 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0328 01:33:03.907185    6044 round_trippers.go:577] Response Headers:
	I0328 01:33:03.907185    6044 round_trippers.go:580]     Audit-Id: 7b356ce8-a0f4-4104-a748-f7538e174307
	I0328 01:33:03.907185    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:33:03.907185    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:33:03.907185    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:33:03.907185    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:33:03.907185    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:33:03 GMT
	I0328 01:33:03.907724    6044 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-76f75df574-776ph","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"dc1416cc-736d-4eab-b95d-e963572b78e3","resourceVersion":"1866","creationTimestamp":"2024-03-28T01:07:44Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"840f6c47-adf3-4c08-8b73-04e55f98f236","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:07:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"840f6c47-adf3-4c08-8b73-04e55f98f236\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0328 01:33:03.908063    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:33:03.908593    6044 round_trippers.go:469] Request Headers:
	I0328 01:33:03.908593    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:33:03.908593    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:33:03.911910    6044 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0328 01:33:03.911910    6044 round_trippers.go:577] Response Headers:
	I0328 01:33:03.911910    6044 round_trippers.go:580]     Audit-Id: feb794f2-52ce-4747-beec-4c78cf33d607
	I0328 01:33:03.912164    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:33:03.912164    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:33:03.912199    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:33:03.912199    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:33:03.912199    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:33:03 GMT
	I0328 01:33:03.912594    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"2022","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5245 chars]
	I0328 01:33:03.913101    6044 pod_ready.go:102] pod "coredns-76f75df574-776ph" in "kube-system" namespace has status "Ready":"False"
	I0328 01:33:04.400234    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-776ph
	I0328 01:33:04.400500    6044 round_trippers.go:469] Request Headers:
	I0328 01:33:04.400500    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:33:04.400500    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:33:04.404794    6044 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 01:33:04.404794    6044 round_trippers.go:577] Response Headers:
	I0328 01:33:04.404794    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:33:04.404794    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:33:04 GMT
	I0328 01:33:04.404794    6044 round_trippers.go:580]     Audit-Id: 1c1bcc28-b189-4ac0-8143-c89be2a65a82
	I0328 01:33:04.404794    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:33:04.404794    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:33:04.405451    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:33:04.405719    6044 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-76f75df574-776ph","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"dc1416cc-736d-4eab-b95d-e963572b78e3","resourceVersion":"1866","creationTimestamp":"2024-03-28T01:07:44Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"840f6c47-adf3-4c08-8b73-04e55f98f236","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:07:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"840f6c47-adf3-4c08-8b73-04e55f98f236\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0328 01:33:04.406970    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:33:04.406970    6044 round_trippers.go:469] Request Headers:
	I0328 01:33:04.407069    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:33:04.407069    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:33:04.410321    6044 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0328 01:33:04.410321    6044 round_trippers.go:577] Response Headers:
	I0328 01:33:04.410321    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:33:04.410321    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:33:04 GMT
	I0328 01:33:04.410568    6044 round_trippers.go:580]     Audit-Id: 31d9d99f-a0c7-4770-b695-0d3f1bae2718
	I0328 01:33:04.410568    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:33:04.410568    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:33:04.410568    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:33:04.411057    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"2022","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5245 chars]
	I0328 01:33:04.905462    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-776ph
	I0328 01:33:04.905549    6044 round_trippers.go:469] Request Headers:
	I0328 01:33:04.905549    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:33:04.905549    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:33:04.910298    6044 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 01:33:04.910440    6044 round_trippers.go:577] Response Headers:
	I0328 01:33:04.910440    6044 round_trippers.go:580]     Audit-Id: 00c89c4f-a678-4086-a51c-030ed1d62a3f
	I0328 01:33:04.910440    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:33:04.910440    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:33:04.910525    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:33:04.910525    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:33:04.910525    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:33:04 GMT
	I0328 01:33:04.910525    6044 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-76f75df574-776ph","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"dc1416cc-736d-4eab-b95d-e963572b78e3","resourceVersion":"1866","creationTimestamp":"2024-03-28T01:07:44Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"840f6c47-adf3-4c08-8b73-04e55f98f236","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:07:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"840f6c47-adf3-4c08-8b73-04e55f98f236\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0328 01:33:04.911576    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:33:04.911576    6044 round_trippers.go:469] Request Headers:
	I0328 01:33:04.911576    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:33:04.911576    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:33:04.916761    6044 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 01:33:04.916761    6044 round_trippers.go:577] Response Headers:
	I0328 01:33:04.916761    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:33:04.916761    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:33:04.916761    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:33:04 GMT
	I0328 01:33:04.916761    6044 round_trippers.go:580]     Audit-Id: 7bf6e77b-66b3-41d7-ade2-7c62ba084289
	I0328 01:33:04.916761    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:33:04.916761    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:33:04.917521    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"2022","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5245 chars]
	I0328 01:33:05.394936    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-776ph
	I0328 01:33:05.395226    6044 round_trippers.go:469] Request Headers:
	I0328 01:33:05.395226    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:33:05.395226    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:33:05.402804    6044 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0328 01:33:05.402804    6044 round_trippers.go:577] Response Headers:
	I0328 01:33:05.402804    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:33:05.402804    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:33:05.402804    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:33:05 GMT
	I0328 01:33:05.402804    6044 round_trippers.go:580]     Audit-Id: 3628962c-5e78-4933-a6ea-28deddcada1b
	I0328 01:33:05.402804    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:33:05.402804    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:33:05.402804    6044 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-76f75df574-776ph","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"dc1416cc-736d-4eab-b95d-e963572b78e3","resourceVersion":"1866","creationTimestamp":"2024-03-28T01:07:44Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"840f6c47-adf3-4c08-8b73-04e55f98f236","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:07:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"840f6c47-adf3-4c08-8b73-04e55f98f236\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0328 01:33:05.404168    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:33:05.404198    6044 round_trippers.go:469] Request Headers:
	I0328 01:33:05.404198    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:33:05.404246    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:33:05.407482    6044 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0328 01:33:05.407482    6044 round_trippers.go:577] Response Headers:
	I0328 01:33:05.407482    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:33:05.407482    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:33:05.407482    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:33:05.407482    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:33:05.407482    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:33:05 GMT
	I0328 01:33:05.407482    6044 round_trippers.go:580]     Audit-Id: f7737af8-af4a-44f1-8d71-8216c121aa27
	I0328 01:33:05.408709    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"2022","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5245 chars]
	I0328 01:33:05.894378    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-776ph
	I0328 01:33:05.894378    6044 round_trippers.go:469] Request Headers:
	I0328 01:33:05.894378    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:33:05.894378    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:33:05.899311    6044 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 01:33:05.899311    6044 round_trippers.go:577] Response Headers:
	I0328 01:33:05.899311    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:33:05.899311    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:33:05.899311    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:33:05.899311    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:33:05.899311    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:33:05 GMT
	I0328 01:33:05.899311    6044 round_trippers.go:580]     Audit-Id: bbe443c9-464b-48cc-9830-c308933e119c
	I0328 01:33:05.899311    6044 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-76f75df574-776ph","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"dc1416cc-736d-4eab-b95d-e963572b78e3","resourceVersion":"1866","creationTimestamp":"2024-03-28T01:07:44Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"840f6c47-adf3-4c08-8b73-04e55f98f236","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:07:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"840f6c47-adf3-4c08-8b73-04e55f98f236\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0328 01:33:05.900357    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:33:05.900357    6044 round_trippers.go:469] Request Headers:
	I0328 01:33:05.900357    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:33:05.900357    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:33:05.906603    6044 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0328 01:33:05.906603    6044 round_trippers.go:577] Response Headers:
	I0328 01:33:05.906603    6044 round_trippers.go:580]     Audit-Id: 33f40ecd-6544-404f-8ef6-bd867ff9aa1b
	I0328 01:33:05.906603    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:33:05.906603    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:33:05.906603    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:33:05.906603    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:33:05.906603    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:33:05 GMT
	I0328 01:33:05.908462    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"2022","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5245 chars]
	I0328 01:33:06.393392    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-776ph
	I0328 01:33:06.393578    6044 round_trippers.go:469] Request Headers:
	I0328 01:33:06.393578    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:33:06.393578    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:33:06.398134    6044 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 01:33:06.398426    6044 round_trippers.go:577] Response Headers:
	I0328 01:33:06.398426    6044 round_trippers.go:580]     Audit-Id: d6bd93ac-a1f3-4184-b3d3-139445514e8b
	I0328 01:33:06.398426    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:33:06.398426    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:33:06.398426    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:33:06.398426    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:33:06.398426    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:33:06 GMT
	I0328 01:33:06.399075    6044 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-76f75df574-776ph","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"dc1416cc-736d-4eab-b95d-e963572b78e3","resourceVersion":"1866","creationTimestamp":"2024-03-28T01:07:44Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"840f6c47-adf3-4c08-8b73-04e55f98f236","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:07:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"840f6c47-adf3-4c08-8b73-04e55f98f236\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0328 01:33:06.399860    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:33:06.399931    6044 round_trippers.go:469] Request Headers:
	I0328 01:33:06.399931    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:33:06.399931    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:33:06.405548    6044 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 01:33:06.405548    6044 round_trippers.go:577] Response Headers:
	I0328 01:33:06.405548    6044 round_trippers.go:580]     Audit-Id: 3de03604-9488-4bba-b335-568d07700fc6
	I0328 01:33:06.405548    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:33:06.405548    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:33:06.405548    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:33:06.405548    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:33:06.405548    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:33:06 GMT
	I0328 01:33:06.405548    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"2022","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5245 chars]
	I0328 01:33:06.406308    6044 pod_ready.go:102] pod "coredns-76f75df574-776ph" in "kube-system" namespace has status "Ready":"False"
	I0328 01:33:06.897161    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-776ph
	I0328 01:33:06.897240    6044 round_trippers.go:469] Request Headers:
	I0328 01:33:06.897240    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:33:06.897240    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:33:06.901664    6044 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 01:33:06.902004    6044 round_trippers.go:577] Response Headers:
	I0328 01:33:06.902004    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:33:06.902004    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:33:06.902004    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:33:06.902004    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:33:06.902075    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:33:06 GMT
	I0328 01:33:06.902075    6044 round_trippers.go:580]     Audit-Id: 09e2bfe4-7193-441f-a7d4-142f6ef5f67d
	I0328 01:33:06.902374    6044 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-76f75df574-776ph","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"dc1416cc-736d-4eab-b95d-e963572b78e3","resourceVersion":"1866","creationTimestamp":"2024-03-28T01:07:44Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"840f6c47-adf3-4c08-8b73-04e55f98f236","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:07:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"840f6c47-adf3-4c08-8b73-04e55f98f236\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0328 01:33:06.903175    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:33:06.903230    6044 round_trippers.go:469] Request Headers:
	I0328 01:33:06.903230    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:33:06.903230    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:33:06.906790    6044 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0328 01:33:06.906993    6044 round_trippers.go:577] Response Headers:
	I0328 01:33:06.906993    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:33:06.906993    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:33:06.906993    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:33:06.906993    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:33:06.906993    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:33:06 GMT
	I0328 01:33:06.907116    6044 round_trippers.go:580]     Audit-Id: 72d28c5d-1b6b-4059-bbdf-8efe65038be0
	I0328 01:33:06.907230    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"2022","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5245 chars]
	I0328 01:33:07.394758    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-776ph
	I0328 01:33:07.394758    6044 round_trippers.go:469] Request Headers:
	I0328 01:33:07.394758    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:33:07.394841    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:33:07.402844    6044 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0328 01:33:07.402844    6044 round_trippers.go:577] Response Headers:
	I0328 01:33:07.402844    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:33:07.402844    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:33:07.402844    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:33:07.402844    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:33:07.402844    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:33:07 GMT
	I0328 01:33:07.402844    6044 round_trippers.go:580]     Audit-Id: dab4dcd8-b31a-46f3-bb65-03661761549c
	I0328 01:33:07.402844    6044 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-76f75df574-776ph","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"dc1416cc-736d-4eab-b95d-e963572b78e3","resourceVersion":"1866","creationTimestamp":"2024-03-28T01:07:44Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"840f6c47-adf3-4c08-8b73-04e55f98f236","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:07:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"840f6c47-adf3-4c08-8b73-04e55f98f236\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0328 01:33:07.403662    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:33:07.403662    6044 round_trippers.go:469] Request Headers:
	I0328 01:33:07.403662    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:33:07.403662    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:33:07.407525    6044 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0328 01:33:07.407525    6044 round_trippers.go:577] Response Headers:
	I0328 01:33:07.407525    6044 round_trippers.go:580]     Audit-Id: 5f51400a-0468-4e96-9500-bccbe980ec0d
	I0328 01:33:07.407525    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:33:07.407525    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:33:07.407525    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:33:07.407525    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:33:07.407525    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:33:07 GMT
	I0328 01:33:07.407525    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"2022","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5245 chars]
	I0328 01:33:07.907184    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-776ph
	I0328 01:33:07.907184    6044 round_trippers.go:469] Request Headers:
	I0328 01:33:07.907184    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:33:07.907184    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:33:07.911624    6044 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 01:33:07.911624    6044 round_trippers.go:577] Response Headers:
	I0328 01:33:07.911624    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:33:07.911624    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:33:07.911624    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:33:07 GMT
	I0328 01:33:07.912474    6044 round_trippers.go:580]     Audit-Id: b9fce0ca-35b0-4919-afd8-ffa1781f256f
	I0328 01:33:07.912474    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:33:07.912474    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:33:07.912689    6044 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-76f75df574-776ph","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"dc1416cc-736d-4eab-b95d-e963572b78e3","resourceVersion":"1866","creationTimestamp":"2024-03-28T01:07:44Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"840f6c47-adf3-4c08-8b73-04e55f98f236","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:07:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"840f6c47-adf3-4c08-8b73-04e55f98f236\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0328 01:33:07.913490    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:33:07.913490    6044 round_trippers.go:469] Request Headers:
	I0328 01:33:07.913490    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:33:07.913490    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:33:07.916534    6044 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0328 01:33:07.916534    6044 round_trippers.go:577] Response Headers:
	I0328 01:33:07.916534    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:33:07.916534    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:33:07.916534    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:33:07.916593    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:33:07 GMT
	I0328 01:33:07.916593    6044 round_trippers.go:580]     Audit-Id: 526986ac-5300-411e-9287-5b02366af36c
	I0328 01:33:07.916593    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:33:07.916915    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"2022","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5245 chars]
	I0328 01:33:08.406447    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-776ph
	I0328 01:33:08.406447    6044 round_trippers.go:469] Request Headers:
	I0328 01:33:08.406447    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:33:08.406447    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:33:08.410044    6044 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0328 01:33:08.410044    6044 round_trippers.go:577] Response Headers:
	I0328 01:33:08.410044    6044 round_trippers.go:580]     Audit-Id: b8b4b0c5-1e81-4255-a450-51557f34af7b
	I0328 01:33:08.410044    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:33:08.410044    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:33:08.410044    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:33:08.410044    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:33:08.411015    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:33:08 GMT
	I0328 01:33:08.411015    6044 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-76f75df574-776ph","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"dc1416cc-736d-4eab-b95d-e963572b78e3","resourceVersion":"1866","creationTimestamp":"2024-03-28T01:07:44Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"840f6c47-adf3-4c08-8b73-04e55f98f236","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:07:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"840f6c47-adf3-4c08-8b73-04e55f98f236\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0328 01:33:08.412020    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:33:08.412020    6044 round_trippers.go:469] Request Headers:
	I0328 01:33:08.412020    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:33:08.412101    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:33:08.414842    6044 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0328 01:33:08.415686    6044 round_trippers.go:577] Response Headers:
	I0328 01:33:08.415686    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:33:08 GMT
	I0328 01:33:08.415686    6044 round_trippers.go:580]     Audit-Id: 825aa512-2d91-44d7-819e-f5f725b4b3fa
	I0328 01:33:08.415686    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:33:08.415686    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:33:08.415686    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:33:08.415686    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:33:08.415686    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"2022","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5245 chars]
	I0328 01:33:08.416398    6044 pod_ready.go:102] pod "coredns-76f75df574-776ph" in "kube-system" namespace has status "Ready":"False"
	I0328 01:33:08.905388    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-776ph
	I0328 01:33:08.905388    6044 round_trippers.go:469] Request Headers:
	I0328 01:33:08.905388    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:33:08.905388    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:33:08.910638    6044 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 01:33:08.910638    6044 round_trippers.go:577] Response Headers:
	I0328 01:33:08.910638    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:33:08.910716    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:33:08.910716    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:33:08.910716    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:33:08 GMT
	I0328 01:33:08.910716    6044 round_trippers.go:580]     Audit-Id: 0448ff0f-5b7f-453a-8d43-2d0a99f9c9a5
	I0328 01:33:08.910716    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:33:08.910988    6044 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-76f75df574-776ph","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"dc1416cc-736d-4eab-b95d-e963572b78e3","resourceVersion":"1866","creationTimestamp":"2024-03-28T01:07:44Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"840f6c47-adf3-4c08-8b73-04e55f98f236","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:07:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"840f6c47-adf3-4c08-8b73-04e55f98f236\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0328 01:33:08.911579    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:33:08.911579    6044 round_trippers.go:469] Request Headers:
	I0328 01:33:08.911579    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:33:08.911579    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:33:08.915167    6044 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0328 01:33:08.915520    6044 round_trippers.go:577] Response Headers:
	I0328 01:33:08.915520    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:33:08.915520    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:33:08.915520    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:33:08.915520    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:33:08 GMT
	I0328 01:33:08.915520    6044 round_trippers.go:580]     Audit-Id: 065889ff-c8f1-4fea-bed6-0b197eaf1adf
	I0328 01:33:08.915584    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:33:08.916162    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"2022","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5245 chars]
	I0328 01:33:09.404122    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-776ph
	I0328 01:33:09.404122    6044 round_trippers.go:469] Request Headers:
	I0328 01:33:09.404122    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:33:09.404244    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:33:09.411432    6044 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0328 01:33:09.411432    6044 round_trippers.go:577] Response Headers:
	I0328 01:33:09.411432    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:33:09.411432    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:33:09 GMT
	I0328 01:33:09.411432    6044 round_trippers.go:580]     Audit-Id: beba5670-36b3-4f4c-88a9-cb37450f7fde
	I0328 01:33:09.411432    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:33:09.411432    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:33:09.411432    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:33:09.411432    6044 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-76f75df574-776ph","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"dc1416cc-736d-4eab-b95d-e963572b78e3","resourceVersion":"1866","creationTimestamp":"2024-03-28T01:07:44Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"840f6c47-adf3-4c08-8b73-04e55f98f236","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:07:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"840f6c47-adf3-4c08-8b73-04e55f98f236\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0328 01:33:09.412197    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:33:09.412197    6044 round_trippers.go:469] Request Headers:
	I0328 01:33:09.412197    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:33:09.412197    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:33:09.416213    6044 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 01:33:09.416213    6044 round_trippers.go:577] Response Headers:
	I0328 01:33:09.416213    6044 round_trippers.go:580]     Audit-Id: a4293e46-855d-4402-b0ce-a079f60dadac
	I0328 01:33:09.416213    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:33:09.416213    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:33:09.416329    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:33:09.416405    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:33:09.416405    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:33:09 GMT
	I0328 01:33:09.416529    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"2022","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5245 chars]
	I0328 01:33:09.901462    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-776ph
	I0328 01:33:09.901462    6044 round_trippers.go:469] Request Headers:
	I0328 01:33:09.901462    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:33:09.901462    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:33:09.906109    6044 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 01:33:09.906109    6044 round_trippers.go:577] Response Headers:
	I0328 01:33:09.906109    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:33:09 GMT
	I0328 01:33:09.906109    6044 round_trippers.go:580]     Audit-Id: 2b7b0d75-763f-4145-8144-f51c0108e6d3
	I0328 01:33:09.906109    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:33:09.906109    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:33:09.906289    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:33:09.906289    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:33:09.906667    6044 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-76f75df574-776ph","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"dc1416cc-736d-4eab-b95d-e963572b78e3","resourceVersion":"1866","creationTimestamp":"2024-03-28T01:07:44Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"840f6c47-adf3-4c08-8b73-04e55f98f236","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:07:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"840f6c47-adf3-4c08-8b73-04e55f98f236\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0328 01:33:09.907366    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:33:09.907366    6044 round_trippers.go:469] Request Headers:
	I0328 01:33:09.907366    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:33:09.907366    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:33:09.910515    6044 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0328 01:33:09.910762    6044 round_trippers.go:577] Response Headers:
	I0328 01:33:09.910762    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:33:09.910762    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:33:09.910762    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:33:09.910762    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:33:09 GMT
	I0328 01:33:09.910762    6044 round_trippers.go:580]     Audit-Id: dca9629d-51c4-4aac-b69b-b21d22c4b13b
	I0328 01:33:09.910762    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:33:09.910968    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"2022","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5245 chars]
	I0328 01:33:10.402273    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-776ph
	I0328 01:33:10.402273    6044 round_trippers.go:469] Request Headers:
	I0328 01:33:10.402273    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:33:10.402273    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:33:10.407166    6044 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 01:33:10.407233    6044 round_trippers.go:577] Response Headers:
	I0328 01:33:10.407233    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:33:10 GMT
	I0328 01:33:10.407233    6044 round_trippers.go:580]     Audit-Id: 6e8a9639-cf75-4ac9-a2dd-1627b49fcb23
	I0328 01:33:10.407233    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:33:10.407233    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:33:10.407233    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:33:10.407326    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:33:10.407865    6044 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-76f75df574-776ph","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"dc1416cc-736d-4eab-b95d-e963572b78e3","resourceVersion":"1866","creationTimestamp":"2024-03-28T01:07:44Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"840f6c47-adf3-4c08-8b73-04e55f98f236","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:07:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"840f6c47-adf3-4c08-8b73-04e55f98f236\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0328 01:33:10.408780    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:33:10.408914    6044 round_trippers.go:469] Request Headers:
	I0328 01:33:10.408914    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:33:10.408914    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:33:10.413106    6044 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 01:33:10.413872    6044 round_trippers.go:577] Response Headers:
	I0328 01:33:10.413941    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:33:10.413941    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:33:10.413941    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:33:10.413941    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:33:10.414006    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:33:10 GMT
	I0328 01:33:10.414006    6044 round_trippers.go:580]     Audit-Id: 140b7fe9-f27a-44f5-94dd-7b4bd588b7f1
	I0328 01:33:10.414193    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"2022","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5245 chars]
	I0328 01:33:10.900936    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-776ph
	I0328 01:33:10.901042    6044 round_trippers.go:469] Request Headers:
	I0328 01:33:10.901042    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:33:10.901042    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:33:10.904400    6044 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0328 01:33:10.905417    6044 round_trippers.go:577] Response Headers:
	I0328 01:33:10.905417    6044 round_trippers.go:580]     Audit-Id: 1d4ad733-fabb-42ac-aff2-4991466c2a27
	I0328 01:33:10.905457    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:33:10.905457    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:33:10.905457    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:33:10.905457    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:33:10.905457    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:33:10 GMT
	I0328 01:33:10.905595    6044 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-76f75df574-776ph","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"dc1416cc-736d-4eab-b95d-e963572b78e3","resourceVersion":"1866","creationTimestamp":"2024-03-28T01:07:44Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"840f6c47-adf3-4c08-8b73-04e55f98f236","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:07:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"840f6c47-adf3-4c08-8b73-04e55f98f236\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0328 01:33:10.906177    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:33:10.906177    6044 round_trippers.go:469] Request Headers:
	I0328 01:33:10.906177    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:33:10.906332    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:33:10.909462    6044 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0328 01:33:10.909637    6044 round_trippers.go:577] Response Headers:
	I0328 01:33:10.909637    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:33:10.909687    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:33:10.909687    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:33:10 GMT
	I0328 01:33:10.909687    6044 round_trippers.go:580]     Audit-Id: eed2ce8f-e4b0-4620-a161-def523ecc219
	I0328 01:33:10.909687    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:33:10.909734    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:33:10.909786    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"2022","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5245 chars]
	I0328 01:33:10.910316    6044 pod_ready.go:102] pod "coredns-76f75df574-776ph" in "kube-system" namespace has status "Ready":"False"
	I0328 01:33:11.400884    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-776ph
	I0328 01:33:11.401017    6044 round_trippers.go:469] Request Headers:
	I0328 01:33:11.401017    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:33:11.401017    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:33:11.404869    6044 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0328 01:33:11.404869    6044 round_trippers.go:577] Response Headers:
	I0328 01:33:11.404869    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:33:11.404869    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:33:11 GMT
	I0328 01:33:11.404869    6044 round_trippers.go:580]     Audit-Id: 0aeededd-4810-44a1-a3c2-c76b431c4c25
	I0328 01:33:11.404869    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:33:11.404869    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:33:11.404869    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:33:11.405880    6044 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-76f75df574-776ph","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"dc1416cc-736d-4eab-b95d-e963572b78e3","resourceVersion":"1866","creationTimestamp":"2024-03-28T01:07:44Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"840f6c47-adf3-4c08-8b73-04e55f98f236","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:07:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"840f6c47-adf3-4c08-8b73-04e55f98f236\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0328 01:33:11.406616    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:33:11.406616    6044 round_trippers.go:469] Request Headers:
	I0328 01:33:11.406703    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:33:11.406703    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:33:11.413693    6044 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0328 01:33:11.413693    6044 round_trippers.go:577] Response Headers:
	I0328 01:33:11.413693    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:33:11.413693    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:33:11.413693    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:33:11 GMT
	I0328 01:33:11.413693    6044 round_trippers.go:580]     Audit-Id: 58ef7b12-f119-4722-b747-8a16363daa76
	I0328 01:33:11.413693    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:33:11.413693    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:33:11.414444    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"2022","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5245 chars]
	I0328 01:33:11.902841    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-776ph
	I0328 01:33:11.902841    6044 round_trippers.go:469] Request Headers:
	I0328 01:33:11.902841    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:33:11.902841    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:33:11.906923    6044 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0328 01:33:11.906923    6044 round_trippers.go:577] Response Headers:
	I0328 01:33:11.906923    6044 round_trippers.go:580]     Audit-Id: b5a2d79d-b633-495b-90f0-845476c889e0
	I0328 01:33:11.906923    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:33:11.906923    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:33:11.906923    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:33:11.906923    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:33:11.906923    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:33:11 GMT
	I0328 01:33:11.907695    6044 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-76f75df574-776ph","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"dc1416cc-736d-4eab-b95d-e963572b78e3","resourceVersion":"1866","creationTimestamp":"2024-03-28T01:07:44Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"840f6c47-adf3-4c08-8b73-04e55f98f236","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:07:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"840f6c47-adf3-4c08-8b73-04e55f98f236\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0328 01:33:11.908306    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:33:11.908306    6044 round_trippers.go:469] Request Headers:
	I0328 01:33:11.908306    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:33:11.908306    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:33:11.911341    6044 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0328 01:33:11.911341    6044 round_trippers.go:577] Response Headers:
	I0328 01:33:11.911341    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:33:11.911524    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:33:11 GMT
	I0328 01:33:11.911524    6044 round_trippers.go:580]     Audit-Id: babc38e8-2c37-45c8-9b07-4358e99bddfc
	I0328 01:33:11.911524    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:33:11.911524    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:33:11.911524    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:33:11.911524    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"2022","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5245 chars]
	I0328 01:33:12.402309    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-776ph
	I0328 01:33:12.402309    6044 round_trippers.go:469] Request Headers:
	I0328 01:33:12.402309    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:33:12.402585    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:33:12.408245    6044 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 01:33:12.408245    6044 round_trippers.go:577] Response Headers:
	I0328 01:33:12.408245    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:33:12 GMT
	I0328 01:33:12.408245    6044 round_trippers.go:580]     Audit-Id: b63bf871-414a-4391-a6c9-281cb4fbecec
	I0328 01:33:12.408245    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:33:12.408245    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:33:12.408245    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:33:12.408245    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:33:12.408462    6044 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-76f75df574-776ph","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"dc1416cc-736d-4eab-b95d-e963572b78e3","resourceVersion":"1866","creationTimestamp":"2024-03-28T01:07:44Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"840f6c47-adf3-4c08-8b73-04e55f98f236","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:07:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"840f6c47-adf3-4c08-8b73-04e55f98f236\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0328 01:33:12.409121    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:33:12.409121    6044 round_trippers.go:469] Request Headers:
	I0328 01:33:12.409121    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:33:12.409281    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:33:12.412590    6044 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0328 01:33:12.413273    6044 round_trippers.go:577] Response Headers:
	I0328 01:33:12.413273    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:33:12 GMT
	I0328 01:33:12.413273    6044 round_trippers.go:580]     Audit-Id: ded0763b-5919-4e5e-9d9a-0bdb07a4d799
	I0328 01:33:12.413273    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:33:12.413348    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:33:12.413348    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:33:12.413348    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:33:12.413723    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"2022","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5245 chars]
	I0328 01:33:12.903090    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-776ph
	I0328 01:33:12.903090    6044 round_trippers.go:469] Request Headers:
	I0328 01:33:12.903090    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:33:12.903090    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:33:12.907498    6044 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 01:33:12.907567    6044 round_trippers.go:577] Response Headers:
	I0328 01:33:12.907567    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:33:12.907567    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:33:12.907567    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:33:12.907567    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:33:12.907673    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:33:12 GMT
	I0328 01:33:12.907673    6044 round_trippers.go:580]     Audit-Id: 2ff537f2-9526-4276-b194-329111e0f0d0
	I0328 01:33:12.907868    6044 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-76f75df574-776ph","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"dc1416cc-736d-4eab-b95d-e963572b78e3","resourceVersion":"1866","creationTimestamp":"2024-03-28T01:07:44Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"840f6c47-adf3-4c08-8b73-04e55f98f236","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:07:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"840f6c47-adf3-4c08-8b73-04e55f98f236\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0328 01:33:12.908747    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:33:12.908747    6044 round_trippers.go:469] Request Headers:
	I0328 01:33:12.908801    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:33:12.908801    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:33:12.911681    6044 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0328 01:33:12.911775    6044 round_trippers.go:577] Response Headers:
	I0328 01:33:12.911775    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:33:12.911873    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:33:12.911873    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:33:12 GMT
	I0328 01:33:12.911873    6044 round_trippers.go:580]     Audit-Id: f5e481d2-6523-4a24-8874-059452d457b6
	I0328 01:33:12.911873    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:33:12.911873    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:33:12.911873    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"2022","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5245 chars]
	I0328 01:33:12.912427    6044 pod_ready.go:102] pod "coredns-76f75df574-776ph" in "kube-system" namespace has status "Ready":"False"
	I0328 01:33:13.406888    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-776ph
	I0328 01:33:13.407011    6044 round_trippers.go:469] Request Headers:
	I0328 01:33:13.407011    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:33:13.407011    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:33:13.411816    6044 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 01:33:13.411816    6044 round_trippers.go:577] Response Headers:
	I0328 01:33:13.411816    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:33:13 GMT
	I0328 01:33:13.411816    6044 round_trippers.go:580]     Audit-Id: cbcbd04d-51af-4b34-8fec-c404b9f30fd4
	I0328 01:33:13.411816    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:33:13.411816    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:33:13.411816    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:33:13.412429    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:33:13.412646    6044 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-76f75df574-776ph","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"dc1416cc-736d-4eab-b95d-e963572b78e3","resourceVersion":"1866","creationTimestamp":"2024-03-28T01:07:44Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"840f6c47-adf3-4c08-8b73-04e55f98f236","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:07:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"840f6c47-adf3-4c08-8b73-04e55f98f236\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0328 01:33:13.413415    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:33:13.413415    6044 round_trippers.go:469] Request Headers:
	I0328 01:33:13.413415    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:33:13.413415    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:33:13.416828    6044 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0328 01:33:13.417159    6044 round_trippers.go:577] Response Headers:
	I0328 01:33:13.417159    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:33:13.417159    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:33:13 GMT
	I0328 01:33:13.417159    6044 round_trippers.go:580]     Audit-Id: 818f870f-6a23-4ba4-a6b7-ebe91c798e4d
	I0328 01:33:13.417159    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:33:13.417159    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:33:13.417159    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:33:13.417570    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"2022","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5245 chars]
	I0328 01:33:13.893748    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-776ph
	I0328 01:33:13.893927    6044 round_trippers.go:469] Request Headers:
	I0328 01:33:13.894021    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:33:13.894021    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:33:13.898411    6044 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 01:33:13.898609    6044 round_trippers.go:577] Response Headers:
	I0328 01:33:13.898681    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:33:13 GMT
	I0328 01:33:13.898681    6044 round_trippers.go:580]     Audit-Id: 851bb9f8-0217-4f89-baed-2375d6be7f1e
	I0328 01:33:13.898681    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:33:13.898681    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:33:13.898681    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:33:13.898681    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:33:13.898849    6044 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-76f75df574-776ph","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"dc1416cc-736d-4eab-b95d-e963572b78e3","resourceVersion":"1866","creationTimestamp":"2024-03-28T01:07:44Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"840f6c47-adf3-4c08-8b73-04e55f98f236","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:07:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"840f6c47-adf3-4c08-8b73-04e55f98f236\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0328 01:33:13.899417    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:33:13.899417    6044 round_trippers.go:469] Request Headers:
	I0328 01:33:13.899417    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:33:13.899417    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:33:13.903407    6044 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0328 01:33:13.903834    6044 round_trippers.go:577] Response Headers:
	I0328 01:33:13.903834    6044 round_trippers.go:580]     Audit-Id: 84e4f908-fb1d-49dd-8aa1-2b2d0694169c
	I0328 01:33:13.903834    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:33:13.903929    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:33:13.903929    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:33:13.903929    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:33:13.903929    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:33:13 GMT
	I0328 01:33:13.903929    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"2022","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5245 chars]
	I0328 01:33:14.400857    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-776ph
	I0328 01:33:14.400857    6044 round_trippers.go:469] Request Headers:
	I0328 01:33:14.400857    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:33:14.400857    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:33:14.406335    6044 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 01:33:14.406335    6044 round_trippers.go:577] Response Headers:
	I0328 01:33:14.406335    6044 round_trippers.go:580]     Audit-Id: f91041b4-b7df-46c7-b2ce-21f57e4f686a
	I0328 01:33:14.406417    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:33:14.406437    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:33:14.406437    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:33:14.406437    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:33:14.406437    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:33:14 GMT
	I0328 01:33:14.406744    6044 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-76f75df574-776ph","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"dc1416cc-736d-4eab-b95d-e963572b78e3","resourceVersion":"1866","creationTimestamp":"2024-03-28T01:07:44Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"840f6c47-adf3-4c08-8b73-04e55f98f236","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:07:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"840f6c47-adf3-4c08-8b73-04e55f98f236\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0328 01:33:14.407522    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:33:14.407522    6044 round_trippers.go:469] Request Headers:
	I0328 01:33:14.407522    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:33:14.407522    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:33:14.413393    6044 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 01:33:14.413393    6044 round_trippers.go:577] Response Headers:
	I0328 01:33:14.413393    6044 round_trippers.go:580]     Audit-Id: 01fc3e27-b96a-4d67-aebf-f84e888032e9
	I0328 01:33:14.413393    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:33:14.413393    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:33:14.413393    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:33:14.413393    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:33:14.413393    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:33:14 GMT
	I0328 01:33:14.413939    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"2022","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5245 chars]
	I0328 01:33:14.902432    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-776ph
	I0328 01:33:14.902432    6044 round_trippers.go:469] Request Headers:
	I0328 01:33:14.902432    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:33:14.902432    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:33:14.906322    6044 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0328 01:33:14.906322    6044 round_trippers.go:577] Response Headers:
	I0328 01:33:14.906322    6044 round_trippers.go:580]     Audit-Id: c22c3fea-044d-4505-b5fa-2e989436c0ca
	I0328 01:33:14.906322    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:33:14.906322    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:33:14.906322    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:33:14.906322    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:33:14.906322    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:33:14 GMT
	I0328 01:33:14.906322    6044 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-76f75df574-776ph","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"dc1416cc-736d-4eab-b95d-e963572b78e3","resourceVersion":"1866","creationTimestamp":"2024-03-28T01:07:44Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"840f6c47-adf3-4c08-8b73-04e55f98f236","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:07:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"840f6c47-adf3-4c08-8b73-04e55f98f236\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0328 01:33:14.907299    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:33:14.907358    6044 round_trippers.go:469] Request Headers:
	I0328 01:33:14.907358    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:33:14.907358    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:33:14.910387    6044 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0328 01:33:14.910387    6044 round_trippers.go:577] Response Headers:
	I0328 01:33:14.910387    6044 round_trippers.go:580]     Audit-Id: 5e175c59-36d0-4a78-8a41-8965adf2fd65
	I0328 01:33:14.910387    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:33:14.910463    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:33:14.910463    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:33:14.910463    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:33:14.910463    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:33:14 GMT
	I0328 01:33:14.910641    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"2022","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5245 chars]
	I0328 01:33:15.401155    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-776ph
	I0328 01:33:15.401233    6044 round_trippers.go:469] Request Headers:
	I0328 01:33:15.401233    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:33:15.401233    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:33:15.405622    6044 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0328 01:33:15.405645    6044 round_trippers.go:577] Response Headers:
	I0328 01:33:15.405645    6044 round_trippers.go:580]     Audit-Id: 81ebdc8a-4c00-4f06-9298-3ec246091ca3
	I0328 01:33:15.405645    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:33:15.405713    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:33:15.405713    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:33:15.405713    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:33:15.405713    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:33:15 GMT
	I0328 01:33:15.408206    6044 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-76f75df574-776ph","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"dc1416cc-736d-4eab-b95d-e963572b78e3","resourceVersion":"1866","creationTimestamp":"2024-03-28T01:07:44Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"840f6c47-adf3-4c08-8b73-04e55f98f236","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:07:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"840f6c47-adf3-4c08-8b73-04e55f98f236\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0328 01:33:15.408885    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:33:15.408885    6044 round_trippers.go:469] Request Headers:
	I0328 01:33:15.408885    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:33:15.408885    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:33:15.413713    6044 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 01:33:15.413713    6044 round_trippers.go:577] Response Headers:
	I0328 01:33:15.413713    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:33:15.413713    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:33:15 GMT
	I0328 01:33:15.413713    6044 round_trippers.go:580]     Audit-Id: 2f6e2aba-66bf-4043-9f07-b5ec38e3a574
	I0328 01:33:15.413713    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:33:15.413713    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:33:15.413713    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:33:15.413713    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"2022","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5245 chars]
	I0328 01:33:15.413713    6044 pod_ready.go:102] pod "coredns-76f75df574-776ph" in "kube-system" namespace has status "Ready":"False"
	I0328 01:33:15.897975    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-776ph
	I0328 01:33:15.897975    6044 round_trippers.go:469] Request Headers:
	I0328 01:33:15.897975    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:33:15.897975    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:33:15.901546    6044 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0328 01:33:15.901546    6044 round_trippers.go:577] Response Headers:
	I0328 01:33:15.901546    6044 round_trippers.go:580]     Audit-Id: bfbdb2fc-8fff-42bf-866c-5b1447aeef3d
	I0328 01:33:15.901546    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:33:15.901546    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:33:15.901546    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:33:15.901546    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:33:15.901546    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:33:15 GMT
	I0328 01:33:15.903269    6044 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-76f75df574-776ph","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"dc1416cc-736d-4eab-b95d-e963572b78e3","resourceVersion":"1866","creationTimestamp":"2024-03-28T01:07:44Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"840f6c47-adf3-4c08-8b73-04e55f98f236","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:07:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"840f6c47-adf3-4c08-8b73-04e55f98f236\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0328 01:33:15.904312    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:33:15.904391    6044 round_trippers.go:469] Request Headers:
	I0328 01:33:15.904391    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:33:15.904391    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:33:15.907292    6044 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0328 01:33:15.907292    6044 round_trippers.go:577] Response Headers:
	I0328 01:33:15.907292    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:33:15 GMT
	I0328 01:33:15.907292    6044 round_trippers.go:580]     Audit-Id: fb39b8c1-94a6-490c-8a6f-dfe0f15fdbb2
	I0328 01:33:15.907292    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:33:15.907292    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:33:15.907826    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:33:15.907826    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:33:15.907980    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"2022","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5245 chars]
	I0328 01:33:16.395943    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-776ph
	I0328 01:33:16.396218    6044 round_trippers.go:469] Request Headers:
	I0328 01:33:16.396218    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:33:16.396218    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:33:16.402280    6044 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 01:33:16.402280    6044 round_trippers.go:577] Response Headers:
	I0328 01:33:16.402280    6044 round_trippers.go:580]     Audit-Id: e14f1b15-a98d-483f-b0f7-bf16f2ef0c7b
	I0328 01:33:16.402375    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:33:16.402375    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:33:16.402375    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:33:16.402375    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:33:16.402506    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:33:16 GMT
	I0328 01:33:16.402795    6044 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-76f75df574-776ph","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"dc1416cc-736d-4eab-b95d-e963572b78e3","resourceVersion":"1866","creationTimestamp":"2024-03-28T01:07:44Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"840f6c47-adf3-4c08-8b73-04e55f98f236","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:07:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"840f6c47-adf3-4c08-8b73-04e55f98f236\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0328 01:33:16.403763    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:33:16.403763    6044 round_trippers.go:469] Request Headers:
	I0328 01:33:16.403763    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:33:16.403763    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:33:16.406904    6044 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0328 01:33:16.407997    6044 round_trippers.go:577] Response Headers:
	I0328 01:33:16.407997    6044 round_trippers.go:580]     Audit-Id: 59bb2cf9-34ed-441b-ac48-6874b79ee56c
	I0328 01:33:16.407997    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:33:16.407997    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:33:16.407997    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:33:16.408053    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:33:16.408053    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:33:16 GMT
	I0328 01:33:16.408053    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"2022","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5245 chars]
	I0328 01:33:16.894960    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-776ph
	I0328 01:33:16.894960    6044 round_trippers.go:469] Request Headers:
	I0328 01:33:16.895224    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:33:16.895224    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:33:16.900541    6044 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 01:33:16.900541    6044 round_trippers.go:577] Response Headers:
	I0328 01:33:16.900630    6044 round_trippers.go:580]     Audit-Id: c66df5fa-e5fa-4a90-815c-5bf3ca6a9193
	I0328 01:33:16.900630    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:33:16.900630    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:33:16.900630    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:33:16.900630    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:33:16.900630    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:33:16 GMT
	I0328 01:33:16.901187    6044 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-76f75df574-776ph","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"dc1416cc-736d-4eab-b95d-e963572b78e3","resourceVersion":"1866","creationTimestamp":"2024-03-28T01:07:44Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"840f6c47-adf3-4c08-8b73-04e55f98f236","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:07:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"840f6c47-adf3-4c08-8b73-04e55f98f236\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0328 01:33:16.901504    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:33:16.901504    6044 round_trippers.go:469] Request Headers:
	I0328 01:33:16.901504    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:33:16.901504    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:33:16.905080    6044 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0328 01:33:16.905241    6044 round_trippers.go:577] Response Headers:
	I0328 01:33:16.905241    6044 round_trippers.go:580]     Audit-Id: 9578e60d-3511-47df-aa4a-d349f9c6e6ae
	I0328 01:33:16.905241    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:33:16.905241    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:33:16.905241    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:33:16.905241    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:33:16.905241    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:33:16 GMT
	I0328 01:33:16.905511    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"2022","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5245 chars]
	I0328 01:33:17.399827    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-776ph
	I0328 01:33:17.399827    6044 round_trippers.go:469] Request Headers:
	I0328 01:33:17.399827    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:33:17.399827    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:33:17.407576    6044 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0328 01:33:17.407576    6044 round_trippers.go:577] Response Headers:
	I0328 01:33:17.407576    6044 round_trippers.go:580]     Audit-Id: f26e8de5-d5c3-4767-8bb7-a54885777109
	I0328 01:33:17.407576    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:33:17.407576    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:33:17.407576    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:33:17.407576    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:33:17.407576    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:33:17 GMT
	I0328 01:33:17.407576    6044 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-76f75df574-776ph","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"dc1416cc-736d-4eab-b95d-e963572b78e3","resourceVersion":"1866","creationTimestamp":"2024-03-28T01:07:44Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"840f6c47-adf3-4c08-8b73-04e55f98f236","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:07:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"840f6c47-adf3-4c08-8b73-04e55f98f236\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0328 01:33:17.408546    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:33:17.408546    6044 round_trippers.go:469] Request Headers:
	I0328 01:33:17.408546    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:33:17.408546    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:33:17.411301    6044 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0328 01:33:17.411301    6044 round_trippers.go:577] Response Headers:
	I0328 01:33:17.411301    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:33:17.411301    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:33:17.411301    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:33:17.411301    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:33:17.411301    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:33:17 GMT
	I0328 01:33:17.411301    6044 round_trippers.go:580]     Audit-Id: 30f0b1aa-bbba-47c2-bbf3-690d649f4bc0
	I0328 01:33:17.411301    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"2022","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5245 chars]
	I0328 01:33:17.901107    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-776ph
	I0328 01:33:17.901107    6044 round_trippers.go:469] Request Headers:
	I0328 01:33:17.901107    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:33:17.901107    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:33:17.906092    6044 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 01:33:17.906290    6044 round_trippers.go:577] Response Headers:
	I0328 01:33:17.906290    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:33:17.906290    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:33:17.906290    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:33:17.906373    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:33:17.906373    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:33:17 GMT
	I0328 01:33:17.906373    6044 round_trippers.go:580]     Audit-Id: 5196dee3-705b-4ac9-a1ec-8e5be88ae743
	I0328 01:33:17.906572    6044 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-76f75df574-776ph","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"dc1416cc-736d-4eab-b95d-e963572b78e3","resourceVersion":"1866","creationTimestamp":"2024-03-28T01:07:44Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"840f6c47-adf3-4c08-8b73-04e55f98f236","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:07:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"840f6c47-adf3-4c08-8b73-04e55f98f236\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0328 01:33:17.907433    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:33:17.907501    6044 round_trippers.go:469] Request Headers:
	I0328 01:33:17.907501    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:33:17.907501    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:33:17.914444    6044 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0328 01:33:17.914444    6044 round_trippers.go:577] Response Headers:
	I0328 01:33:17.914444    6044 round_trippers.go:580]     Audit-Id: 58a5a43c-84a9-4925-a606-c4536f5d3546
	I0328 01:33:17.914444    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:33:17.914444    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:33:17.914444    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:33:17.914444    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:33:17.914444    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:33:17 GMT
	I0328 01:33:17.914444    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"2022","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5245 chars]
	I0328 01:33:17.915370    6044 pod_ready.go:102] pod "coredns-76f75df574-776ph" in "kube-system" namespace has status "Ready":"False"
	I0328 01:33:18.402917    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-776ph
	I0328 01:33:18.402917    6044 round_trippers.go:469] Request Headers:
	I0328 01:33:18.402917    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:33:18.402917    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:33:18.407927    6044 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 01:33:18.407927    6044 round_trippers.go:577] Response Headers:
	I0328 01:33:18.408022    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:33:18 GMT
	I0328 01:33:18.408022    6044 round_trippers.go:580]     Audit-Id: e8e8cec9-4405-412b-ae2e-8591588baca6
	I0328 01:33:18.408022    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:33:18.408022    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:33:18.408022    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:33:18.408022    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:33:18.408668    6044 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-76f75df574-776ph","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"dc1416cc-736d-4eab-b95d-e963572b78e3","resourceVersion":"1866","creationTimestamp":"2024-03-28T01:07:44Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"840f6c47-adf3-4c08-8b73-04e55f98f236","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:07:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"840f6c47-adf3-4c08-8b73-04e55f98f236\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0328 01:33:18.409411    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:33:18.409550    6044 round_trippers.go:469] Request Headers:
	I0328 01:33:18.409550    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:33:18.409550    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:33:18.412808    6044 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0328 01:33:18.413559    6044 round_trippers.go:577] Response Headers:
	I0328 01:33:18.413559    6044 round_trippers.go:580]     Audit-Id: ac4cc8f8-0a5f-4d22-bb27-b9aea5861fd6
	I0328 01:33:18.413654    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:33:18.413654    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:33:18.413683    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:33:18.413683    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:33:18.413683    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:33:18 GMT
	I0328 01:33:18.414482    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"2022","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5245 chars]
	I0328 01:33:18.905651    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-776ph
	I0328 01:33:18.905651    6044 round_trippers.go:469] Request Headers:
	I0328 01:33:18.905651    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:33:18.905651    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:33:18.909988    6044 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 01:33:18.910574    6044 round_trippers.go:577] Response Headers:
	I0328 01:33:18.910574    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:33:18.910574    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:33:18.910574    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:33:18 GMT
	I0328 01:33:18.910574    6044 round_trippers.go:580]     Audit-Id: c6ba6769-7e4c-4ca6-8b95-1565d8e682a7
	I0328 01:33:18.910651    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:33:18.910651    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:33:18.911083    6044 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-76f75df574-776ph","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"dc1416cc-736d-4eab-b95d-e963572b78e3","resourceVersion":"1866","creationTimestamp":"2024-03-28T01:07:44Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"840f6c47-adf3-4c08-8b73-04e55f98f236","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:07:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"840f6c47-adf3-4c08-8b73-04e55f98f236\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0328 01:33:18.911473    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:33:18.911473    6044 round_trippers.go:469] Request Headers:
	I0328 01:33:18.911473    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:33:18.911473    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:33:18.917722    6044 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0328 01:33:18.917722    6044 round_trippers.go:577] Response Headers:
	I0328 01:33:18.917722    6044 round_trippers.go:580]     Audit-Id: b1185063-2fdf-4360-89a9-b28a5a464a6a
	I0328 01:33:18.917722    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:33:18.917722    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:33:18.917722    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:33:18.917722    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:33:18.917722    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:33:18 GMT
	I0328 01:33:18.917722    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"2022","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5245 chars]
	I0328 01:33:19.404803    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-776ph
	I0328 01:33:19.404803    6044 round_trippers.go:469] Request Headers:
	I0328 01:33:19.404803    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:33:19.404803    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:33:19.409290    6044 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 01:33:19.409290    6044 round_trippers.go:577] Response Headers:
	I0328 01:33:19.409290    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:33:19.409290    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:33:19 GMT
	I0328 01:33:19.409290    6044 round_trippers.go:580]     Audit-Id: 05136cd0-871d-47a8-bece-44a7c2d54057
	I0328 01:33:19.409290    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:33:19.409290    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:33:19.409290    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:33:19.410568    6044 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-76f75df574-776ph","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"dc1416cc-736d-4eab-b95d-e963572b78e3","resourceVersion":"1866","creationTimestamp":"2024-03-28T01:07:44Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"840f6c47-adf3-4c08-8b73-04e55f98f236","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:07:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"840f6c47-adf3-4c08-8b73-04e55f98f236\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0328 01:33:19.411203    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:33:19.411203    6044 round_trippers.go:469] Request Headers:
	I0328 01:33:19.411203    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:33:19.411203    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:33:19.414352    6044 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0328 01:33:19.414352    6044 round_trippers.go:577] Response Headers:
	I0328 01:33:19.414914    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:33:19.414914    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:33:19.414914    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:33:19 GMT
	I0328 01:33:19.414914    6044 round_trippers.go:580]     Audit-Id: 4c67372e-2985-4ee7-bce4-ef9ecdf18ed6
	I0328 01:33:19.414914    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:33:19.414914    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:33:19.418276    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"2022","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5245 chars]
	I0328 01:33:19.899518    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-776ph
	I0328 01:33:19.899518    6044 round_trippers.go:469] Request Headers:
	I0328 01:33:19.899518    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:33:19.899518    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:33:19.904120    6044 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 01:33:19.904120    6044 round_trippers.go:577] Response Headers:
	I0328 01:33:19.904120    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:33:19.904120    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:33:19.904120    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:33:19.904630    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:33:19 GMT
	I0328 01:33:19.904630    6044 round_trippers.go:580]     Audit-Id: a1d3cb39-2d93-40ea-9f75-5e97c532f9a4
	I0328 01:33:19.904630    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:33:19.904960    6044 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-76f75df574-776ph","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"dc1416cc-736d-4eab-b95d-e963572b78e3","resourceVersion":"1866","creationTimestamp":"2024-03-28T01:07:44Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"840f6c47-adf3-4c08-8b73-04e55f98f236","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:07:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"840f6c47-adf3-4c08-8b73-04e55f98f236\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0328 01:33:19.905188    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:33:19.905188    6044 round_trippers.go:469] Request Headers:
	I0328 01:33:19.905188    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:33:19.905188    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:33:19.908916    6044 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0328 01:33:19.908916    6044 round_trippers.go:577] Response Headers:
	I0328 01:33:19.909063    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:33:19.909063    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:33:19.909063    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:33:19.909063    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:33:19 GMT
	I0328 01:33:19.909063    6044 round_trippers.go:580]     Audit-Id: c6081e4a-c066-40e4-b2e5-6dedf37b322b
	I0328 01:33:19.909063    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:33:19.909310    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"2022","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5245 chars]
	I0328 01:33:20.397745    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-776ph
	I0328 01:33:20.397745    6044 round_trippers.go:469] Request Headers:
	I0328 01:33:20.397745    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:33:20.397745    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:33:20.402460    6044 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 01:33:20.402460    6044 round_trippers.go:577] Response Headers:
	I0328 01:33:20.402460    6044 round_trippers.go:580]     Audit-Id: 9d63d476-4603-40a1-b6cf-ac7e3ab521b6
	I0328 01:33:20.402460    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:33:20.402460    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:33:20.402460    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:33:20.402460    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:33:20.402460    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:33:20 GMT
	I0328 01:33:20.402958    6044 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-76f75df574-776ph","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"dc1416cc-736d-4eab-b95d-e963572b78e3","resourceVersion":"1866","creationTimestamp":"2024-03-28T01:07:44Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"840f6c47-adf3-4c08-8b73-04e55f98f236","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:07:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"840f6c47-adf3-4c08-8b73-04e55f98f236\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0328 01:33:20.403695    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:33:20.403695    6044 round_trippers.go:469] Request Headers:
	I0328 01:33:20.403768    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:33:20.403768    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:33:20.407707    6044 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0328 01:33:20.407707    6044 round_trippers.go:577] Response Headers:
	I0328 01:33:20.407707    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:33:20 GMT
	I0328 01:33:20.407707    6044 round_trippers.go:580]     Audit-Id: 6c0d7ef7-701a-484e-bf66-ee1122400092
	I0328 01:33:20.407707    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:33:20.407707    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:33:20.407707    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:33:20.407707    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:33:20.408442    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"2022","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5245 chars]
	I0328 01:33:20.408442    6044 pod_ready.go:102] pod "coredns-76f75df574-776ph" in "kube-system" namespace has status "Ready":"False"
	I0328 01:33:20.894156    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-776ph
	I0328 01:33:20.894208    6044 round_trippers.go:469] Request Headers:
	I0328 01:33:20.894249    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:33:20.894249    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:33:20.899615    6044 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 01:33:20.899615    6044 round_trippers.go:577] Response Headers:
	I0328 01:33:20.899615    6044 round_trippers.go:580]     Audit-Id: c9ea700e-6641-417f-9de5-079ee99cacad
	I0328 01:33:20.899615    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:33:20.899615    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:33:20.899615    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:33:20.899615    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:33:20.899615    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:33:20 GMT
	I0328 01:33:20.899873    6044 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-76f75df574-776ph","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"dc1416cc-736d-4eab-b95d-e963572b78e3","resourceVersion":"1866","creationTimestamp":"2024-03-28T01:07:44Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"840f6c47-adf3-4c08-8b73-04e55f98f236","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:07:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"840f6c47-adf3-4c08-8b73-04e55f98f236\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0328 01:33:20.900575    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:33:20.900575    6044 round_trippers.go:469] Request Headers:
	I0328 01:33:20.900630    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:33:20.900630    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:33:20.902864    6044 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0328 01:33:20.902864    6044 round_trippers.go:577] Response Headers:
	I0328 01:33:20.903854    6044 round_trippers.go:580]     Audit-Id: 470b4198-4f70-46bd-ade5-8c242c0f24b4
	I0328 01:33:20.903854    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:33:20.903854    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:33:20.903854    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:33:20.903854    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:33:20.903854    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:33:20 GMT
	I0328 01:33:20.903854    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"2022","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5245 chars]
	I0328 01:33:21.392759    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-776ph
	I0328 01:33:21.392876    6044 round_trippers.go:469] Request Headers:
	I0328 01:33:21.392876    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:33:21.392876    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:33:21.398006    6044 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 01:33:21.398079    6044 round_trippers.go:577] Response Headers:
	I0328 01:33:21.398079    6044 round_trippers.go:580]     Audit-Id: c8ebdaaf-c590-4048-ae7b-d0ed8b58b9d5
	I0328 01:33:21.398079    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:33:21.398079    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:33:21.398079    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:33:21.398079    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:33:21.398079    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:33:21 GMT
	I0328 01:33:21.398306    6044 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-76f75df574-776ph","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"dc1416cc-736d-4eab-b95d-e963572b78e3","resourceVersion":"1866","creationTimestamp":"2024-03-28T01:07:44Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"840f6c47-adf3-4c08-8b73-04e55f98f236","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:07:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"840f6c47-adf3-4c08-8b73-04e55f98f236\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0328 01:33:21.399252    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:33:21.399322    6044 round_trippers.go:469] Request Headers:
	I0328 01:33:21.399322    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:33:21.399322    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:33:21.404540    6044 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 01:33:21.404884    6044 round_trippers.go:577] Response Headers:
	I0328 01:33:21.404884    6044 round_trippers.go:580]     Audit-Id: bf9f4e0b-1af1-4172-b6f2-ce6bc2ea962c
	I0328 01:33:21.404884    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:33:21.404884    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:33:21.404884    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:33:21.404884    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:33:21.404884    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:33:21 GMT
	I0328 01:33:21.405286    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"2022","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5245 chars]
	I0328 01:33:21.905299    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-776ph
	I0328 01:33:21.905299    6044 round_trippers.go:469] Request Headers:
	I0328 01:33:21.905299    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:33:21.905299    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:33:21.910037    6044 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 01:33:21.910936    6044 round_trippers.go:577] Response Headers:
	I0328 01:33:21.910936    6044 round_trippers.go:580]     Audit-Id: 2a1d2e6c-530d-41fb-9cb0-98890abcd2ea
	I0328 01:33:21.910936    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:33:21.910936    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:33:21.910936    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:33:21.910936    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:33:21.910936    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:33:21 GMT
	I0328 01:33:21.911431    6044 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-76f75df574-776ph","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"dc1416cc-736d-4eab-b95d-e963572b78e3","resourceVersion":"1866","creationTimestamp":"2024-03-28T01:07:44Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"840f6c47-adf3-4c08-8b73-04e55f98f236","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:07:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"840f6c47-adf3-4c08-8b73-04e55f98f236\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0328 01:33:21.912165    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:33:21.912165    6044 round_trippers.go:469] Request Headers:
	I0328 01:33:21.912165    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:33:21.912165    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:33:21.915552    6044 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0328 01:33:21.915552    6044 round_trippers.go:577] Response Headers:
	I0328 01:33:21.915889    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:33:21 GMT
	I0328 01:33:21.915889    6044 round_trippers.go:580]     Audit-Id: 9f4530bb-3c9b-4280-a9c2-07399eb81622
	I0328 01:33:21.915889    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:33:21.915889    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:33:21.915889    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:33:21.915889    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:33:21.916186    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"2022","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5245 chars]
	I0328 01:33:22.403352    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-776ph
	I0328 01:33:22.403352    6044 round_trippers.go:469] Request Headers:
	I0328 01:33:22.403544    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:33:22.403544    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:33:22.407852    6044 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 01:33:22.408415    6044 round_trippers.go:577] Response Headers:
	I0328 01:33:22.408415    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:33:22.408415    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:33:22.408415    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:33:22.408415    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:33:22.408415    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:33:22 GMT
	I0328 01:33:22.408415    6044 round_trippers.go:580]     Audit-Id: 606f40f5-7f9b-4288-b822-4d47454db001
	I0328 01:33:22.408604    6044 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-76f75df574-776ph","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"dc1416cc-736d-4eab-b95d-e963572b78e3","resourceVersion":"1866","creationTimestamp":"2024-03-28T01:07:44Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"840f6c47-adf3-4c08-8b73-04e55f98f236","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:07:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"840f6c47-adf3-4c08-8b73-04e55f98f236\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0328 01:33:22.409433    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:33:22.409433    6044 round_trippers.go:469] Request Headers:
	I0328 01:33:22.409433    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:33:22.409433    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:33:22.415817    6044 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0328 01:33:22.415817    6044 round_trippers.go:577] Response Headers:
	I0328 01:33:22.415817    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:33:22.415817    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:33:22.415817    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:33:22 GMT
	I0328 01:33:22.415817    6044 round_trippers.go:580]     Audit-Id: 01e67ea8-6e97-4707-bd0b-476f219825e3
	I0328 01:33:22.415817    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:33:22.415817    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:33:22.415817    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"2022","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5245 chars]
	I0328 01:33:22.416615    6044 pod_ready.go:102] pod "coredns-76f75df574-776ph" in "kube-system" namespace has status "Ready":"False"
	I0328 01:33:22.903274    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-776ph
	I0328 01:33:22.903529    6044 round_trippers.go:469] Request Headers:
	I0328 01:33:22.903529    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:33:22.903529    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:33:22.907957    6044 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 01:33:22.907957    6044 round_trippers.go:577] Response Headers:
	I0328 01:33:22.907957    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:33:22.907957    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:33:22.907957    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:33:22.907957    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:33:22.908620    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:33:22 GMT
	I0328 01:33:22.908620    6044 round_trippers.go:580]     Audit-Id: bb826da2-ddc8-4349-8c9f-c4fb52a53976
	I0328 01:33:22.908922    6044 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-76f75df574-776ph","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"dc1416cc-736d-4eab-b95d-e963572b78e3","resourceVersion":"1866","creationTimestamp":"2024-03-28T01:07:44Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"840f6c47-adf3-4c08-8b73-04e55f98f236","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:07:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"840f6c47-adf3-4c08-8b73-04e55f98f236\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0328 01:33:22.910097    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:33:22.910097    6044 round_trippers.go:469] Request Headers:
	I0328 01:33:22.910097    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:33:22.910097    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:33:22.912493    6044 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0328 01:33:22.913485    6044 round_trippers.go:577] Response Headers:
	I0328 01:33:22.913485    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:33:22 GMT
	I0328 01:33:22.913485    6044 round_trippers.go:580]     Audit-Id: 94b4cedf-100f-4aa1-aaac-f83031d5f39e
	I0328 01:33:22.913485    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:33:22.913485    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:33:22.913485    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:33:22.913485    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:33:22.913766    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"2022","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5245 chars]
	I0328 01:33:23.404480    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-776ph
	I0328 01:33:23.404480    6044 round_trippers.go:469] Request Headers:
	I0328 01:33:23.404480    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:33:23.404480    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:33:23.409204    6044 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 01:33:23.409204    6044 round_trippers.go:577] Response Headers:
	I0328 01:33:23.409276    6044 round_trippers.go:580]     Audit-Id: 6276168e-37bf-492d-b48a-a9f66a3f87a6
	I0328 01:33:23.409276    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:33:23.409310    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:33:23.409310    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:33:23.409310    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:33:23.409310    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:33:23 GMT
	I0328 01:33:23.409404    6044 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-76f75df574-776ph","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"dc1416cc-736d-4eab-b95d-e963572b78e3","resourceVersion":"1866","creationTimestamp":"2024-03-28T01:07:44Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"840f6c47-adf3-4c08-8b73-04e55f98f236","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:07:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"840f6c47-adf3-4c08-8b73-04e55f98f236\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0328 01:33:23.410204    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:33:23.410204    6044 round_trippers.go:469] Request Headers:
	I0328 01:33:23.410204    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:33:23.410204    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:33:23.413003    6044 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0328 01:33:23.413003    6044 round_trippers.go:577] Response Headers:
	I0328 01:33:23.413003    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:33:23.413003    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:33:23.413003    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:33:23.413003    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:33:23.413003    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:33:23 GMT
	I0328 01:33:23.413003    6044 round_trippers.go:580]     Audit-Id: 649876e3-fcc5-4458-b4d8-c338999393e1
	I0328 01:33:23.413882    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"2022","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5245 chars]
	I0328 01:33:23.906653    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-776ph
	I0328 01:33:23.906653    6044 round_trippers.go:469] Request Headers:
	I0328 01:33:23.906653    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:33:23.906653    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:33:23.910896    6044 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 01:33:23.910896    6044 round_trippers.go:577] Response Headers:
	I0328 01:33:23.910896    6044 round_trippers.go:580]     Audit-Id: 018033d5-367b-4a13-a0da-f13d72f9fcef
	I0328 01:33:23.910896    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:33:23.910896    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:33:23.910896    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:33:23.910896    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:33:23.910896    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:33:23 GMT
	I0328 01:33:23.911490    6044 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-76f75df574-776ph","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"dc1416cc-736d-4eab-b95d-e963572b78e3","resourceVersion":"1866","creationTimestamp":"2024-03-28T01:07:44Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"840f6c47-adf3-4c08-8b73-04e55f98f236","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:07:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"840f6c47-adf3-4c08-8b73-04e55f98f236\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0328 01:33:23.912553    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:33:23.913137    6044 round_trippers.go:469] Request Headers:
	I0328 01:33:23.913137    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:33:23.913206    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:33:23.921862    6044 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0328 01:33:23.921862    6044 round_trippers.go:577] Response Headers:
	I0328 01:33:23.921862    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:33:23.921862    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:33:23 GMT
	I0328 01:33:23.921862    6044 round_trippers.go:580]     Audit-Id: 43cef98e-fc69-41a5-950f-6c2a290b1f05
	I0328 01:33:23.921862    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:33:23.921862    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:33:23.921862    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:33:23.922403    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"2022","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5245 chars]
	I0328 01:33:24.397812    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-776ph
	I0328 01:33:24.397812    6044 round_trippers.go:469] Request Headers:
	I0328 01:33:24.397812    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:33:24.397812    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:33:24.402109    6044 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 01:33:24.402109    6044 round_trippers.go:577] Response Headers:
	I0328 01:33:24.402109    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:33:24.402109    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:33:24.402109    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:33:24 GMT
	I0328 01:33:24.402109    6044 round_trippers.go:580]     Audit-Id: a5e41747-1a4c-4ba8-9286-77e10147e999
	I0328 01:33:24.402109    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:33:24.402109    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:33:24.402295    6044 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-76f75df574-776ph","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"dc1416cc-736d-4eab-b95d-e963572b78e3","resourceVersion":"1866","creationTimestamp":"2024-03-28T01:07:44Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"840f6c47-adf3-4c08-8b73-04e55f98f236","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:07:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"840f6c47-adf3-4c08-8b73-04e55f98f236\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0328 01:33:24.403163    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:33:24.403163    6044 round_trippers.go:469] Request Headers:
	I0328 01:33:24.403163    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:33:24.403163    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:33:24.410097    6044 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0328 01:33:24.410661    6044 round_trippers.go:577] Response Headers:
	I0328 01:33:24.410661    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:33:24.410661    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:33:24.410729    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:33:24 GMT
	I0328 01:33:24.410729    6044 round_trippers.go:580]     Audit-Id: b6255abf-63dc-4217-abcc-5ee0715dbc95
	I0328 01:33:24.410729    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:33:24.410729    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:33:24.410829    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"2022","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5245 chars]
	I0328 01:33:24.902191    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-776ph
	I0328 01:33:24.902191    6044 round_trippers.go:469] Request Headers:
	I0328 01:33:24.902357    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:33:24.902357    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:33:24.907558    6044 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 01:33:24.907558    6044 round_trippers.go:577] Response Headers:
	I0328 01:33:24.907558    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:33:24.907558    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:33:24 GMT
	I0328 01:33:24.907683    6044 round_trippers.go:580]     Audit-Id: deb81a15-3d88-4342-ba4f-e2e1ac4c1806
	I0328 01:33:24.907683    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:33:24.907683    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:33:24.907683    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:33:24.907889    6044 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-76f75df574-776ph","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"dc1416cc-736d-4eab-b95d-e963572b78e3","resourceVersion":"1866","creationTimestamp":"2024-03-28T01:07:44Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"840f6c47-adf3-4c08-8b73-04e55f98f236","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:07:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"840f6c47-adf3-4c08-8b73-04e55f98f236\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0328 01:33:24.908676    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:33:24.908676    6044 round_trippers.go:469] Request Headers:
	I0328 01:33:24.908676    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:33:24.908676    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:33:24.913405    6044 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 01:33:24.913405    6044 round_trippers.go:577] Response Headers:
	I0328 01:33:24.913405    6044 round_trippers.go:580]     Audit-Id: d188cdbf-7fe0-4567-acf0-37d815bbd882
	I0328 01:33:24.913405    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:33:24.913405    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:33:24.913405    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:33:24.913405    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:33:24.913405    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:33:24 GMT
	I0328 01:33:24.913949    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"2022","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5245 chars]
	I0328 01:33:24.914066    6044 pod_ready.go:102] pod "coredns-76f75df574-776ph" in "kube-system" namespace has status "Ready":"False"
	I0328 01:33:25.403099    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-776ph
	I0328 01:33:25.403343    6044 round_trippers.go:469] Request Headers:
	I0328 01:33:25.403343    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:33:25.403343    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:33:25.408053    6044 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0328 01:33:25.408053    6044 round_trippers.go:577] Response Headers:
	I0328 01:33:25.408053    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:33:25.408053    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:33:25.408145    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:33:25.408166    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:33:25.408166    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:33:25 GMT
	I0328 01:33:25.408166    6044 round_trippers.go:580]     Audit-Id: 255f01ef-c25b-49a7-abdb-fa33cbfcf5ca
	I0328 01:33:25.408322    6044 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-76f75df574-776ph","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"dc1416cc-736d-4eab-b95d-e963572b78e3","resourceVersion":"1866","creationTimestamp":"2024-03-28T01:07:44Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"840f6c47-adf3-4c08-8b73-04e55f98f236","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:07:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"840f6c47-adf3-4c08-8b73-04e55f98f236\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0328 01:33:25.409120    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:33:25.409120    6044 round_trippers.go:469] Request Headers:
	I0328 01:33:25.409120    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:33:25.409120    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:33:25.412953    6044 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0328 01:33:25.413086    6044 round_trippers.go:577] Response Headers:
	I0328 01:33:25.413086    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:33:25.413086    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:33:25.413086    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:33:25 GMT
	I0328 01:33:25.413086    6044 round_trippers.go:580]     Audit-Id: 1994d08b-72b6-43d6-856a-7a355a2b49c4
	I0328 01:33:25.413086    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:33:25.413177    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:33:25.413467    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"2022","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5245 chars]
	I0328 01:33:25.905013    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-776ph
	I0328 01:33:25.905013    6044 round_trippers.go:469] Request Headers:
	I0328 01:33:25.905236    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:33:25.905236    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:33:25.909050    6044 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0328 01:33:25.909050    6044 round_trippers.go:577] Response Headers:
	I0328 01:33:25.909050    6044 round_trippers.go:580]     Audit-Id: 5deb2128-2210-487a-b92f-aa7c2cdece70
	I0328 01:33:25.909050    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:33:25.909050    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:33:25.909050    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:33:25.909050    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:33:25.909050    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:33:25 GMT
	I0328 01:33:25.910341    6044 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-76f75df574-776ph","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"dc1416cc-736d-4eab-b95d-e963572b78e3","resourceVersion":"2063","creationTimestamp":"2024-03-28T01:07:44Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"840f6c47-adf3-4c08-8b73-04e55f98f236","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:07:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"840f6c47-adf3-4c08-8b73-04e55f98f236\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6788 chars]
	I0328 01:33:25.910711    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:33:25.910711    6044 round_trippers.go:469] Request Headers:
	I0328 01:33:25.910711    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:33:25.910711    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:33:25.916312    6044 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 01:33:25.916312    6044 round_trippers.go:577] Response Headers:
	I0328 01:33:25.916384    6044 round_trippers.go:580]     Audit-Id: 2d0d6149-375c-4f70-bb45-ffa30adfe893
	I0328 01:33:25.916384    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:33:25.916384    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:33:25.916410    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:33:25.916410    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:33:25.916410    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:33:25 GMT
	I0328 01:33:25.916410    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"2022","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5245 chars]
	I0328 01:33:25.916997    6044 pod_ready.go:92] pod "coredns-76f75df574-776ph" in "kube-system" namespace has status "Ready":"True"
	I0328 01:33:25.916997    6044 pod_ready.go:81] duration metric: took 31.5250368s for pod "coredns-76f75df574-776ph" in "kube-system" namespace to be "Ready" ...
	I0328 01:33:25.916997    6044 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-240000" in "kube-system" namespace to be "Ready" ...
	I0328 01:33:25.917162    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-240000
	I0328 01:33:25.917162    6044 round_trippers.go:469] Request Headers:
	I0328 01:33:25.917162    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:33:25.917162    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:33:25.920588    6044 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0328 01:33:25.920966    6044 round_trippers.go:577] Response Headers:
	I0328 01:33:25.920966    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:33:25.920966    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:33:25.920966    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:33:25.920966    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:33:25 GMT
	I0328 01:33:25.920966    6044 round_trippers.go:580]     Audit-Id: 59e230f4-b079-450c-bdec-30104df7caac
	I0328 01:33:25.920966    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:33:25.920966    6044 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-240000","namespace":"kube-system","uid":"0a33e012-ebfe-4ac4-bf0b-ffccdd7308de","resourceVersion":"1963","creationTimestamp":"2024-03-28T01:32:19Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.28.229.19:2379","kubernetes.io/config.hash":"9f48c65a58defdbb87996760bf93b230","kubernetes.io/config.mirror":"9f48c65a58defdbb87996760bf93b230","kubernetes.io/config.seen":"2024-03-28T01:32:13.690653938Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:32:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6160 chars]
	I0328 01:33:25.921756    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:33:25.921756    6044 round_trippers.go:469] Request Headers:
	I0328 01:33:25.921756    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:33:25.921756    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:33:25.924080    6044 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0328 01:33:25.924080    6044 round_trippers.go:577] Response Headers:
	I0328 01:33:25.924080    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:33:25.924080    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:33:25.924080    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:33:25 GMT
	I0328 01:33:25.924080    6044 round_trippers.go:580]     Audit-Id: 89b917da-6ab9-41dd-b17d-f464b23dec36
	I0328 01:33:25.924080    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:33:25.924080    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:33:25.925230    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"2022","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5245 chars]
	I0328 01:33:25.925442    6044 pod_ready.go:92] pod "etcd-multinode-240000" in "kube-system" namespace has status "Ready":"True"
	I0328 01:33:25.925442    6044 pod_ready.go:81] duration metric: took 8.4443ms for pod "etcd-multinode-240000" in "kube-system" namespace to be "Ready" ...
	I0328 01:33:25.925442    6044 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-240000" in "kube-system" namespace to be "Ready" ...
	I0328 01:33:25.925442    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-240000
	I0328 01:33:25.925442    6044 round_trippers.go:469] Request Headers:
	I0328 01:33:25.925442    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:33:25.925442    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:33:25.928789    6044 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0328 01:33:25.928789    6044 round_trippers.go:577] Response Headers:
	I0328 01:33:25.928789    6044 round_trippers.go:580]     Audit-Id: fe7cf0ab-f8de-4b1f-b8e6-d3d60812f570
	I0328 01:33:25.928789    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:33:25.928789    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:33:25.928789    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:33:25.928789    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:33:25.928789    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:33:25 GMT
	I0328 01:33:25.928789    6044 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-240000","namespace":"kube-system","uid":"8b9b4cf7-40b0-4a3e-96ca-28c934f9789a","resourceVersion":"1984","creationTimestamp":"2024-03-28T01:32:19Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.28.229.19:8443","kubernetes.io/config.hash":"ada1864a97137760b3789cc738948aa2","kubernetes.io/config.mirror":"ada1864a97137760b3789cc738948aa2","kubernetes.io/config.seen":"2024-03-28T01:32:13.677615805Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:32:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7695 chars]
	I0328 01:33:25.928789    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:33:25.928789    6044 round_trippers.go:469] Request Headers:
	I0328 01:33:25.928789    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:33:25.928789    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:33:25.931977    6044 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0328 01:33:25.931977    6044 round_trippers.go:577] Response Headers:
	I0328 01:33:25.931977    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:33:25.931977    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:33:25.931977    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:33:25.931977    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:33:25 GMT
	I0328 01:33:25.931977    6044 round_trippers.go:580]     Audit-Id: 99e2ed29-b1ea-436e-8744-0217d01b6d3c
	I0328 01:33:25.931977    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:33:25.932851    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"2022","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5245 chars]
	I0328 01:33:25.932851    6044 pod_ready.go:92] pod "kube-apiserver-multinode-240000" in "kube-system" namespace has status "Ready":"True"
	I0328 01:33:25.932851    6044 pod_ready.go:81] duration metric: took 7.409ms for pod "kube-apiserver-multinode-240000" in "kube-system" namespace to be "Ready" ...
	I0328 01:33:25.932851    6044 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-240000" in "kube-system" namespace to be "Ready" ...
	I0328 01:33:25.933385    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-240000
	I0328 01:33:25.933385    6044 round_trippers.go:469] Request Headers:
	I0328 01:33:25.933385    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:33:25.933385    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:33:25.935852    6044 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0328 01:33:25.936183    6044 round_trippers.go:577] Response Headers:
	I0328 01:33:25.936183    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:33:25 GMT
	I0328 01:33:25.936183    6044 round_trippers.go:580]     Audit-Id: 31b40d30-0790-47d2-b4cb-f05e4189e561
	I0328 01:33:25.936183    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:33:25.936183    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:33:25.936183    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:33:25.936183    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:33:25.936703    6044 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-240000","namespace":"kube-system","uid":"4a79ab06-2314-43bb-8e37-45b9aab24e4e","resourceVersion":"1953","creationTimestamp":"2024-03-28T01:07:31Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"092744cdc60a216294790b52c372bdaa","kubernetes.io/config.mirror":"092744cdc60a216294790b52c372bdaa","kubernetes.io/config.seen":"2024-03-28T01:07:31.458008757Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:07:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7470 chars]
	I0328 01:33:25.936925    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:33:25.936925    6044 round_trippers.go:469] Request Headers:
	I0328 01:33:25.936925    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:33:25.936925    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:33:25.940824    6044 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0328 01:33:25.941259    6044 round_trippers.go:577] Response Headers:
	I0328 01:33:25.941259    6044 round_trippers.go:580]     Audit-Id: 1e88c2ce-a8c0-476b-bc4d-cbef2355dc7b
	I0328 01:33:25.941259    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:33:25.941259    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:33:25.941259    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:33:25.941259    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:33:25.941259    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:33:25 GMT
	I0328 01:33:25.941395    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"2022","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5245 chars]
	I0328 01:33:25.941395    6044 pod_ready.go:92] pod "kube-controller-manager-multinode-240000" in "kube-system" namespace has status "Ready":"True"
	I0328 01:33:25.941395    6044 pod_ready.go:81] duration metric: took 8.5438ms for pod "kube-controller-manager-multinode-240000" in "kube-system" namespace to be "Ready" ...
	I0328 01:33:25.942059    6044 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-47rqg" in "kube-system" namespace to be "Ready" ...
	I0328 01:33:25.942163    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/namespaces/kube-system/pods/kube-proxy-47rqg
	I0328 01:33:25.942209    6044 round_trippers.go:469] Request Headers:
	I0328 01:33:25.942249    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:33:25.942249    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:33:25.945485    6044 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0328 01:33:25.945892    6044 round_trippers.go:577] Response Headers:
	I0328 01:33:25.945892    6044 round_trippers.go:580]     Audit-Id: d8b3859d-a319-40e1-9edd-ab754e7b7412
	I0328 01:33:25.945934    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:33:25.945934    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:33:25.945934    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:33:25.945934    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:33:25.945934    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:33:25 GMT
	I0328 01:33:25.946186    6044 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-47rqg","generateName":"kube-proxy-","namespace":"kube-system","uid":"22fd5683-834d-47ae-a5b4-1ed980514e1b","resourceVersion":"1926","creationTimestamp":"2024-03-28T01:07:44Z","labels":{"controller-revision-hash":"7659797656","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"386441f6-e376-4593-92ba-fa739207b68d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:07:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"386441f6-e376-4593-92ba-fa739207b68d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6034 chars]
	I0328 01:33:25.946186    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:33:25.946186    6044 round_trippers.go:469] Request Headers:
	I0328 01:33:25.946186    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:33:25.946186    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:33:25.959838    6044 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0328 01:33:25.959838    6044 round_trippers.go:577] Response Headers:
	I0328 01:33:25.959838    6044 round_trippers.go:580]     Audit-Id: a6a22129-282e-46e4-a6d9-f8ae6fcb4f8a
	I0328 01:33:25.959915    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:33:25.959915    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:33:25.959915    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:33:25.959915    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:33:25.959915    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:33:25 GMT
	I0328 01:33:25.960276    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"2022","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upda
te","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5245 chars]
	I0328 01:33:25.960576    6044 pod_ready.go:92] pod "kube-proxy-47rqg" in "kube-system" namespace has status "Ready":"True"
	I0328 01:33:25.960576    6044 pod_ready.go:81] duration metric: took 18.5164ms for pod "kube-proxy-47rqg" in "kube-system" namespace to be "Ready" ...
	I0328 01:33:25.960576    6044 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-55rch" in "kube-system" namespace to be "Ready" ...
	I0328 01:33:26.107823    6044 request.go:629] Waited for 146.8931ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.229.19:8443/api/v1/namespaces/kube-system/pods/kube-proxy-55rch
	I0328 01:33:26.107986    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/namespaces/kube-system/pods/kube-proxy-55rch
	I0328 01:33:26.108079    6044 round_trippers.go:469] Request Headers:
	I0328 01:33:26.108079    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:33:26.108079    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:33:26.112760    6044 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 01:33:26.112839    6044 round_trippers.go:577] Response Headers:
	I0328 01:33:26.112895    6044 round_trippers.go:580]     Audit-Id: 55311ac7-1fea-4d40-a4a9-0cd032216a29
	I0328 01:33:26.112895    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:33:26.112895    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:33:26.112895    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:33:26.112895    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:33:26.112895    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:33:26 GMT
	I0328 01:33:26.112895    6044 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-55rch","generateName":"kube-proxy-","namespace":"kube-system","uid":"a96f841b-3e8f-42c1-be63-03914c0b90e8","resourceVersion":"1831","creationTimestamp":"2024-03-28T01:15:58Z","labels":{"controller-revision-hash":"7659797656","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"386441f6-e376-4593-92ba-fa739207b68d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:15:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"386441f6-e376-4593-92ba-fa739207b68d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6067 chars]
	I0328 01:33:26.310240    6044 request.go:629] Waited for 196.3437ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.229.19:8443/api/v1/nodes/multinode-240000-m03
	I0328 01:33:26.310452    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000-m03
	I0328 01:33:26.310452    6044 round_trippers.go:469] Request Headers:
	I0328 01:33:26.310452    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:33:26.310571    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:33:26.314877    6044 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 01:33:26.314877    6044 round_trippers.go:577] Response Headers:
	I0328 01:33:26.314877    6044 round_trippers.go:580]     Audit-Id: 5c6c493c-a45d-451e-ada2-b34620109013
	I0328 01:33:26.314877    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:33:26.314877    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:33:26.314877    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:33:26.314877    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:33:26.314877    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:33:26 GMT
	I0328 01:33:26.315923    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000-m03","uid":"dbbc38c1-7871-4a48-98eb-4fd00b43bc22","resourceVersion":"2000","creationTimestamp":"2024-03-28T01:27:30Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_28T01_27_31_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:27:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:meta
data":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-mana [truncated 4407 chars]
	I0328 01:33:26.316173    6044 pod_ready.go:97] node "multinode-240000-m03" hosting pod "kube-proxy-55rch" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-240000-m03" has status "Ready":"Unknown"
	I0328 01:33:26.316173    6044 pod_ready.go:81] duration metric: took 355.5952ms for pod "kube-proxy-55rch" in "kube-system" namespace to be "Ready" ...
	E0328 01:33:26.316173    6044 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-240000-m03" hosting pod "kube-proxy-55rch" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-240000-m03" has status "Ready":"Unknown"
	I0328 01:33:26.316173    6044 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-t88gz" in "kube-system" namespace to be "Ready" ...
	I0328 01:33:26.512974    6044 request.go:629] Waited for 196.7991ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.229.19:8443/api/v1/namespaces/kube-system/pods/kube-proxy-t88gz
	I0328 01:33:26.512974    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/namespaces/kube-system/pods/kube-proxy-t88gz
	I0328 01:33:26.512974    6044 round_trippers.go:469] Request Headers:
	I0328 01:33:26.512974    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:33:26.512974    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:33:26.520672    6044 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0328 01:33:26.521149    6044 round_trippers.go:577] Response Headers:
	I0328 01:33:26.521149    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:33:26.521149    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:33:26.521149    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:33:26 GMT
	I0328 01:33:26.521149    6044 round_trippers.go:580]     Audit-Id: 84904272-5dff-4ae6-98d0-edaa0989a44f
	I0328 01:33:26.521251    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:33:26.521251    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:33:26.521544    6044 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-t88gz","generateName":"kube-proxy-","namespace":"kube-system","uid":"695603ac-ab24-4206-9728-342b2af018e4","resourceVersion":"2046","creationTimestamp":"2024-03-28T01:10:54Z","labels":{"controller-revision-hash":"7659797656","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"386441f6-e376-4593-92ba-fa739207b68d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:10:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"386441f6-e376-4593-92ba-fa739207b68d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6067 chars]
	I0328 01:33:26.715629    6044 request.go:629] Waited for 193.245ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.229.19:8443/api/v1/nodes/multinode-240000-m02
	I0328 01:33:26.715629    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000-m02
	I0328 01:33:26.715629    6044 round_trippers.go:469] Request Headers:
	I0328 01:33:26.715860    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:33:26.715860    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:33:26.719480    6044 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0328 01:33:26.719480    6044 round_trippers.go:577] Response Headers:
	I0328 01:33:26.720051    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:33:26.720051    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:33:26.720105    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:33:26.720105    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:33:26.720105    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:33:26 GMT
	I0328 01:33:26.720105    6044 round_trippers.go:580]     Audit-Id: db922d7b-6b81-4f10-97a8-3f415d74ee4d
	I0328 01:33:26.720105    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000-m02","uid":"dcbe05d8-e31e-4891-a7f5-f1d6a1993934","resourceVersion":"2050","creationTimestamp":"2024-03-28T01:10:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_03_28T01_10_55_0700","minikube.k8s.io/version":"v1.33.0-beta.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:10:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-mana [truncated 4590 chars]
	I0328 01:33:26.720846    6044 pod_ready.go:97] node "multinode-240000-m02" hosting pod "kube-proxy-t88gz" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-240000-m02" has status "Ready":"Unknown"
	I0328 01:33:26.720846    6044 pod_ready.go:81] duration metric: took 404.6697ms for pod "kube-proxy-t88gz" in "kube-system" namespace to be "Ready" ...
	E0328 01:33:26.720846    6044 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-240000-m02" hosting pod "kube-proxy-t88gz" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-240000-m02" has status "Ready":"Unknown"
	I0328 01:33:26.720846    6044 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-240000" in "kube-system" namespace to be "Ready" ...
	I0328 01:33:26.916741    6044 request.go:629] Waited for 195.2064ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.229.19:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-240000
	I0328 01:33:26.916878    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-240000
	I0328 01:33:26.916878    6044 round_trippers.go:469] Request Headers:
	I0328 01:33:26.916878    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:33:26.916878    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:33:26.921108    6044 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0328 01:33:26.921108    6044 round_trippers.go:577] Response Headers:
	I0328 01:33:26.921108    6044 round_trippers.go:580]     Audit-Id: 04001a40-3617-4aa9-afcf-461b32414f73
	I0328 01:33:26.921108    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:33:26.921108    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:33:26.921108    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:33:26.921108    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:33:26.921108    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:33:26 GMT
	I0328 01:33:26.921908    6044 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-240000","namespace":"kube-system","uid":"7670489f-fb6c-4b5f-80e8-5fe8de8d7d19","resourceVersion":"1966","creationTimestamp":"2024-03-28T01:07:28Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"f5f9b00a2a0d8b16290abf555def0fb3","kubernetes.io/config.mirror":"f5f9b00a2a0d8b16290abf555def0fb3","kubernetes.io/config.seen":"2024-03-28T01:07:21.513186595Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:07:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 5200 chars]
	I0328 01:33:27.119643    6044 request.go:629] Waited for 197.429ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:33:27.119962    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes/multinode-240000
	I0328 01:33:27.119962    6044 round_trippers.go:469] Request Headers:
	I0328 01:33:27.119962    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:33:27.119962    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:33:27.123702    6044 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0328 01:33:27.123702    6044 round_trippers.go:577] Response Headers:
	I0328 01:33:27.123702    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:33:27.123702    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:33:27.123702    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:33:27 GMT
	I0328 01:33:27.123702    6044 round_trippers.go:580]     Audit-Id: 074c09fb-8199-48a4-9987-29d324e2b7af
	I0328 01:33:27.123702    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:33:27.123702    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:33:27.124455    6044 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"2022","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:07:26Z","fieldsType":"Field [truncated 5245 chars]
	I0328 01:33:27.125162    6044 pod_ready.go:92] pod "kube-scheduler-multinode-240000" in "kube-system" namespace has status "Ready":"True"
	I0328 01:33:27.125234    6044 pod_ready.go:81] duration metric: took 404.386ms for pod "kube-scheduler-multinode-240000" in "kube-system" namespace to be "Ready" ...
	I0328 01:33:27.125234    6044 pod_ready.go:38] duration metric: took 32.7464721s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0328 01:33:27.125300    6044 api_server.go:52] waiting for apiserver process to appear ...
	I0328 01:33:27.135988    6044 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0328 01:33:27.167532    6044 command_runner.go:130] > 6539c85e1b61
	I0328 01:33:27.167532    6044 logs.go:276] 1 containers: [6539c85e1b61]
	I0328 01:33:27.178699    6044 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0328 01:33:27.205577    6044 command_runner.go:130] > ab4a76ecb029
	I0328 01:33:27.205577    6044 logs.go:276] 1 containers: [ab4a76ecb029]
	I0328 01:33:27.215601    6044 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0328 01:33:27.244506    6044 command_runner.go:130] > e6a5a75ec447
	I0328 01:33:27.244506    6044 command_runner.go:130] > 29e516c918ef
	I0328 01:33:27.244506    6044 logs.go:276] 2 containers: [e6a5a75ec447 29e516c918ef]
	I0328 01:33:27.255096    6044 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0328 01:33:27.280610    6044 command_runner.go:130] > bc83a37dbd03
	I0328 01:33:27.280610    6044 command_runner.go:130] > 7061eab02790
	I0328 01:33:27.280610    6044 logs.go:276] 2 containers: [bc83a37dbd03 7061eab02790]
	I0328 01:33:27.289627    6044 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0328 01:33:27.316168    6044 command_runner.go:130] > 7c9638784c60
	I0328 01:33:27.316168    6044 command_runner.go:130] > bb0b3c542264
	I0328 01:33:27.316168    6044 logs.go:276] 2 containers: [7c9638784c60 bb0b3c542264]
	I0328 01:33:27.325446    6044 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0328 01:33:27.356038    6044 command_runner.go:130] > ceaccf323dee
	I0328 01:33:27.356038    6044 command_runner.go:130] > 1aa05268773e
	I0328 01:33:27.356038    6044 logs.go:276] 2 containers: [ceaccf323dee 1aa05268773e]
	I0328 01:33:27.364608    6044 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0328 01:33:27.395264    6044 command_runner.go:130] > ee99098e42fc
	I0328 01:33:27.395264    6044 command_runner.go:130] > dc9808261b21
	I0328 01:33:27.395264    6044 logs.go:276] 2 containers: [ee99098e42fc dc9808261b21]
	I0328 01:33:27.395264    6044 logs.go:123] Gathering logs for kube-controller-manager [1aa05268773e] ...
	I0328 01:33:27.395264    6044 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1aa05268773e"
	I0328 01:33:27.440809    6044 command_runner.go:130] ! I0328 01:07:25.444563       1 serving.go:380] Generated self-signed cert in-memory
	I0328 01:33:27.440809    6044 command_runner.go:130] ! I0328 01:07:26.119304       1 controllermanager.go:187] "Starting" version="v1.29.3"
	I0328 01:33:27.440809    6044 command_runner.go:130] ! I0328 01:07:26.119639       1 controllermanager.go:189] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0328 01:33:27.441768    6044 command_runner.go:130] ! I0328 01:07:26.122078       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0328 01:33:27.441768    6044 command_runner.go:130] ! I0328 01:07:26.122399       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0328 01:33:27.441768    6044 command_runner.go:130] ! I0328 01:07:26.123748       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0328 01:33:27.441768    6044 command_runner.go:130] ! I0328 01:07:26.124035       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0328 01:33:27.441768    6044 command_runner.go:130] ! I0328 01:07:29.961001       1 controllermanager.go:735] "Started controller" controller="serviceaccount-token-controller"
	I0328 01:33:27.441768    6044 command_runner.go:130] ! I0328 01:07:29.961384       1 shared_informer.go:311] Waiting for caches to sync for tokens
	I0328 01:33:27.441768    6044 command_runner.go:130] ! I0328 01:07:29.977654       1 controllermanager.go:735] "Started controller" controller="serviceaccount-controller"
	I0328 01:33:27.441768    6044 command_runner.go:130] ! I0328 01:07:29.978314       1 serviceaccounts_controller.go:111] "Starting service account controller"
	I0328 01:33:27.441768    6044 command_runner.go:130] ! I0328 01:07:29.978353       1 shared_informer.go:311] Waiting for caches to sync for service account
	I0328 01:33:27.441768    6044 command_runner.go:130] ! I0328 01:07:29.991603       1 controllermanager.go:735] "Started controller" controller="job-controller"
	I0328 01:33:27.441768    6044 command_runner.go:130] ! I0328 01:07:29.992075       1 job_controller.go:224] "Starting job controller"
	I0328 01:33:27.441768    6044 command_runner.go:130] ! I0328 01:07:29.992191       1 shared_informer.go:311] Waiting for caches to sync for job
	I0328 01:33:27.441768    6044 command_runner.go:130] ! I0328 01:07:30.016866       1 controllermanager.go:735] "Started controller" controller="cronjob-controller"
	I0328 01:33:27.441768    6044 command_runner.go:130] ! I0328 01:07:30.017722       1 cronjob_controllerv2.go:139] "Starting cronjob controller v2"
	I0328 01:33:27.441768    6044 command_runner.go:130] ! I0328 01:07:30.017738       1 shared_informer.go:311] Waiting for caches to sync for cronjob
	I0328 01:33:27.441768    6044 command_runner.go:130] ! I0328 01:07:30.032215       1 node_lifecycle_controller.go:425] "Controller will reconcile labels"
	I0328 01:33:27.441768    6044 command_runner.go:130] ! I0328 01:07:30.032285       1 controllermanager.go:735] "Started controller" controller="node-lifecycle-controller"
	I0328 01:33:27.441768    6044 command_runner.go:130] ! I0328 01:07:30.032300       1 core.go:294] "Warning: configure-cloud-routes is set, but no cloud provider specified. Will not configure cloud provider routes."
	I0328 01:33:27.441768    6044 command_runner.go:130] ! I0328 01:07:30.032309       1 controllermanager.go:713] "Warning: skipping controller" controller="node-route-controller"
	I0328 01:33:27.441768    6044 command_runner.go:130] ! I0328 01:07:30.032580       1 node_lifecycle_controller.go:459] "Sending events to api server"
	I0328 01:33:27.441768    6044 command_runner.go:130] ! I0328 01:07:30.032630       1 node_lifecycle_controller.go:470] "Starting node controller"
	I0328 01:33:27.441768    6044 command_runner.go:130] ! I0328 01:07:30.032638       1 shared_informer.go:311] Waiting for caches to sync for taint
	I0328 01:33:27.441768    6044 command_runner.go:130] ! I0328 01:07:30.048026       1 controllermanager.go:735] "Started controller" controller="persistentvolume-protection-controller"
	I0328 01:33:27.441768    6044 command_runner.go:130] ! I0328 01:07:30.048977       1 pv_protection_controller.go:78] "Starting PV protection controller"
	I0328 01:33:27.441768    6044 command_runner.go:130] ! I0328 01:07:30.049064       1 shared_informer.go:311] Waiting for caches to sync for PV protection
	I0328 01:33:27.441768    6044 command_runner.go:130] ! I0328 01:07:30.062689       1 shared_informer.go:318] Caches are synced for tokens
	I0328 01:33:27.441768    6044 command_runner.go:130] ! I0328 01:07:30.089724       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="cronjobs.batch"
	I0328 01:33:27.441768    6044 command_runner.go:130] ! I0328 01:07:30.089888       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="deployments.apps"
	I0328 01:33:27.441768    6044 command_runner.go:130] ! I0328 01:07:30.089911       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="jobs.batch"
	I0328 01:33:27.441768    6044 command_runner.go:130] ! W0328 01:07:30.089999       1 shared_informer.go:591] resyncPeriod 14h20m6.725226039s is smaller than resyncCheckPeriod 16h11m20.804614115s and the informer has already started. Changing it to 16h11m20.804614115s
	I0328 01:33:27.441768    6044 command_runner.go:130] ! I0328 01:07:30.090238       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="serviceaccounts"
	I0328 01:33:27.441768    6044 command_runner.go:130] ! I0328 01:07:30.090386       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="daemonsets.apps"
	I0328 01:33:27.441768    6044 command_runner.go:130] ! I0328 01:07:30.090486       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="csistoragecapacities.storage.k8s.io"
	I0328 01:33:27.441768    6044 command_runner.go:130] ! I0328 01:07:30.090728       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="horizontalpodautoscalers.autoscaling"
	I0328 01:33:27.441768    6044 command_runner.go:130] ! I0328 01:07:30.090833       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="leases.coordination.k8s.io"
	I0328 01:33:27.441768    6044 command_runner.go:130] ! I0328 01:07:30.090916       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="endpointslices.discovery.k8s.io"
	I0328 01:33:27.441768    6044 command_runner.go:130] ! I0328 01:07:30.091233       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="statefulsets.apps"
	I0328 01:33:27.441768    6044 command_runner.go:130] ! I0328 01:07:30.091333       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="endpoints"
	I0328 01:33:27.441768    6044 command_runner.go:130] ! I0328 01:07:30.091456       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="roles.rbac.authorization.k8s.io"
	I0328 01:33:27.441768    6044 command_runner.go:130] ! I0328 01:07:30.091573       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="rolebindings.rbac.authorization.k8s.io"
	I0328 01:33:27.441768    6044 command_runner.go:130] ! I0328 01:07:30.091823       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="ingresses.networking.k8s.io"
	I0328 01:33:27.441768    6044 command_runner.go:130] ! I0328 01:07:30.091924       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="networkpolicies.networking.k8s.io"
	I0328 01:33:27.441768    6044 command_runner.go:130] ! I0328 01:07:30.092241       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="poddisruptionbudgets.policy"
	I0328 01:33:27.441768    6044 command_runner.go:130] ! I0328 01:07:30.092436       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="podtemplates"
	I0328 01:33:27.441768    6044 command_runner.go:130] ! I0328 01:07:30.092587       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="controllerrevisions.apps"
	I0328 01:33:27.441768    6044 command_runner.go:130] ! I0328 01:07:30.092720       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="limitranges"
	I0328 01:33:27.441768    6044 command_runner.go:130] ! I0328 01:07:30.092907       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="replicasets.apps"
	I0328 01:33:27.441768    6044 command_runner.go:130] ! I0328 01:07:30.092993       1 controllermanager.go:735] "Started controller" controller="resourcequota-controller"
	I0328 01:33:27.441768    6044 command_runner.go:130] ! I0328 01:07:30.093270       1 resource_quota_controller.go:294] "Starting resource quota controller"
	I0328 01:33:27.441768    6044 command_runner.go:130] ! I0328 01:07:30.095516       1 shared_informer.go:311] Waiting for caches to sync for resource quota
	I0328 01:33:27.441768    6044 command_runner.go:130] ! I0328 01:07:30.095735       1 resource_quota_monitor.go:305] "QuotaMonitor running"
	I0328 01:33:27.441768    6044 command_runner.go:130] ! I0328 01:07:30.117824       1 controllermanager.go:735] "Started controller" controller="legacy-serviceaccount-token-cleaner-controller"
	I0328 01:33:27.441768    6044 command_runner.go:130] ! I0328 01:07:30.117990       1 legacy_serviceaccount_token_cleaner.go:103] "Starting legacy service account token cleaner controller"
	I0328 01:33:27.441768    6044 command_runner.go:130] ! I0328 01:07:30.118005       1 shared_informer.go:311] Waiting for caches to sync for legacy-service-account-token-cleaner
	I0328 01:33:27.441768    6044 command_runner.go:130] ! I0328 01:07:30.139352       1 controllermanager.go:735] "Started controller" controller="disruption-controller"
	I0328 01:33:27.441768    6044 command_runner.go:130] ! I0328 01:07:30.139526       1 disruption.go:433] "Sending events to api server."
	I0328 01:33:27.441768    6044 command_runner.go:130] ! I0328 01:07:30.139561       1 disruption.go:444] "Starting disruption controller"
	I0328 01:33:27.441768    6044 command_runner.go:130] ! I0328 01:07:30.139568       1 shared_informer.go:311] Waiting for caches to sync for disruption
	I0328 01:33:27.441768    6044 command_runner.go:130] ! I0328 01:07:30.158607       1 controllermanager.go:735] "Started controller" controller="certificatesigningrequest-approving-controller"
	I0328 01:33:27.441768    6044 command_runner.go:130] ! I0328 01:07:30.158860       1 certificate_controller.go:115] "Starting certificate controller" name="csrapproving"
	I0328 01:33:27.442771    6044 command_runner.go:130] ! I0328 01:07:30.158912       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrapproving
	I0328 01:33:27.442771    6044 command_runner.go:130] ! I0328 01:07:30.170615       1 controllermanager.go:735] "Started controller" controller="persistentvolume-binder-controller"
	I0328 01:33:27.442771    6044 command_runner.go:130] ! I0328 01:07:30.171245       1 pv_controller_base.go:319] "Starting persistent volume controller"
	I0328 01:33:27.442771    6044 command_runner.go:130] ! I0328 01:07:30.171330       1 shared_informer.go:311] Waiting for caches to sync for persistent volume
	I0328 01:33:27.442771    6044 command_runner.go:130] ! I0328 01:07:30.319254       1 controllermanager.go:735] "Started controller" controller="clusterrole-aggregation-controller"
	I0328 01:33:27.442771    6044 command_runner.go:130] ! I0328 01:07:30.319305       1 clusterroleaggregation_controller.go:189] "Starting ClusterRoleAggregator controller"
	I0328 01:33:27.442771    6044 command_runner.go:130] ! I0328 01:07:30.319687       1 shared_informer.go:311] Waiting for caches to sync for ClusterRoleAggregator
	I0328 01:33:27.442771    6044 command_runner.go:130] ! I0328 01:07:30.471941       1 controllermanager.go:735] "Started controller" controller="ttl-after-finished-controller"
	I0328 01:33:27.442771    6044 command_runner.go:130] ! I0328 01:07:30.472075       1 controllermanager.go:687] "Controller is disabled by a feature gate" controller="validatingadmissionpolicy-status-controller" requiredFeatureGates=["ValidatingAdmissionPolicy"]
	I0328 01:33:27.442771    6044 command_runner.go:130] ! I0328 01:07:30.472153       1 ttlafterfinished_controller.go:109] "Starting TTL after finished controller"
	I0328 01:33:27.442771    6044 command_runner.go:130] ! I0328 01:07:30.472461       1 shared_informer.go:311] Waiting for caches to sync for TTL after finished
	I0328 01:33:27.442771    6044 command_runner.go:130] ! I0328 01:07:30.621249       1 controllermanager.go:735] "Started controller" controller="pod-garbage-collector-controller"
	I0328 01:33:27.442771    6044 command_runner.go:130] ! I0328 01:07:30.621373       1 gc_controller.go:101] "Starting GC controller"
	I0328 01:33:27.442771    6044 command_runner.go:130] ! I0328 01:07:30.621385       1 shared_informer.go:311] Waiting for caches to sync for GC
	I0328 01:33:27.442771    6044 command_runner.go:130] ! I0328 01:07:30.935875       1 controllermanager.go:735] "Started controller" controller="horizontal-pod-autoscaler-controller"
	I0328 01:33:27.442771    6044 command_runner.go:130] ! I0328 01:07:30.935911       1 controllermanager.go:687] "Controller is disabled by a feature gate" controller="service-cidr-controller" requiredFeatureGates=["MultiCIDRServiceAllocator"]
	I0328 01:33:27.442771    6044 command_runner.go:130] ! I0328 01:07:30.935949       1 horizontal.go:200] "Starting HPA controller"
	I0328 01:33:27.442771    6044 command_runner.go:130] ! I0328 01:07:30.935957       1 shared_informer.go:311] Waiting for caches to sync for HPA
	I0328 01:33:27.442771    6044 command_runner.go:130] ! I0328 01:07:31.068710       1 controllermanager.go:735] "Started controller" controller="bootstrap-signer-controller"
	I0328 01:33:27.442771    6044 command_runner.go:130] ! I0328 01:07:31.068846       1 shared_informer.go:311] Waiting for caches to sync for bootstrap_signer
	I0328 01:33:27.442771    6044 command_runner.go:130] ! I0328 01:07:31.220656       1 controllermanager.go:735] "Started controller" controller="root-ca-certificate-publisher-controller"
	I0328 01:33:27.442771    6044 command_runner.go:130] ! I0328 01:07:31.220877       1 publisher.go:102] "Starting root CA cert publisher controller"
	I0328 01:33:27.442771    6044 command_runner.go:130] ! I0328 01:07:31.220890       1 shared_informer.go:311] Waiting for caches to sync for crt configmap
	I0328 01:33:27.442771    6044 command_runner.go:130] ! I0328 01:07:31.379912       1 controllermanager.go:735] "Started controller" controller="endpointslice-mirroring-controller"
	I0328 01:33:27.442771    6044 command_runner.go:130] ! I0328 01:07:31.380187       1 endpointslicemirroring_controller.go:223] "Starting EndpointSliceMirroring controller"
	I0328 01:33:27.442771    6044 command_runner.go:130] ! I0328 01:07:31.380276       1 shared_informer.go:311] Waiting for caches to sync for endpoint_slice_mirroring
	I0328 01:33:27.442771    6044 command_runner.go:130] ! I0328 01:07:31.525433       1 controllermanager.go:735] "Started controller" controller="replicationcontroller-controller"
	I0328 01:33:27.442771    6044 command_runner.go:130] ! I0328 01:07:31.525577       1 replica_set.go:214] "Starting controller" name="replicationcontroller"
	I0328 01:33:27.442771    6044 command_runner.go:130] ! I0328 01:07:31.525588       1 shared_informer.go:311] Waiting for caches to sync for ReplicationController
	I0328 01:33:27.442771    6044 command_runner.go:130] ! I0328 01:07:31.690023       1 controllermanager.go:735] "Started controller" controller="ttl-controller"
	I0328 01:33:27.442771    6044 command_runner.go:130] ! I0328 01:07:31.690130       1 ttl_controller.go:124] "Starting TTL controller"
	I0328 01:33:27.442771    6044 command_runner.go:130] ! I0328 01:07:31.690144       1 shared_informer.go:311] Waiting for caches to sync for TTL
	I0328 01:33:27.442771    6044 command_runner.go:130] ! I0328 01:07:31.828859       1 controllermanager.go:735] "Started controller" controller="token-cleaner-controller"
	I0328 01:33:27.442771    6044 command_runner.go:130] ! I0328 01:07:31.828953       1 tokencleaner.go:112] "Starting token cleaner controller"
	I0328 01:33:27.442771    6044 command_runner.go:130] ! I0328 01:07:31.828963       1 shared_informer.go:311] Waiting for caches to sync for token_cleaner
	I0328 01:33:27.442771    6044 command_runner.go:130] ! I0328 01:07:31.828970       1 shared_informer.go:318] Caches are synced for token_cleaner
	I0328 01:33:27.442771    6044 command_runner.go:130] ! I0328 01:07:31.991678       1 controllermanager.go:735] "Started controller" controller="persistentvolume-attach-detach-controller"
	I0328 01:33:27.442771    6044 command_runner.go:130] ! I0328 01:07:31.994944       1 controllermanager.go:687] "Controller is disabled by a feature gate" controller="storageversion-garbage-collector-controller" requiredFeatureGates=["APIServerIdentity","StorageVersionAPI"]
	I0328 01:33:27.442771    6044 command_runner.go:130] ! I0328 01:07:31.994881       1 attach_detach_controller.go:337] "Starting attach detach controller"
	I0328 01:33:27.442771    6044 command_runner.go:130] ! I0328 01:07:31.995033       1 shared_informer.go:311] Waiting for caches to sync for attach detach
	I0328 01:33:27.442771    6044 command_runner.go:130] ! I0328 01:07:32.040043       1 controllermanager.go:735] "Started controller" controller="taint-eviction-controller"
	I0328 01:33:27.442771    6044 command_runner.go:130] ! I0328 01:07:32.041773       1 taint_eviction.go:285] "Starting" controller="taint-eviction-controller"
	I0328 01:33:27.442771    6044 command_runner.go:130] ! I0328 01:07:32.041876       1 taint_eviction.go:291] "Sending events to api server"
	I0328 01:33:27.442771    6044 command_runner.go:130] ! I0328 01:07:32.041901       1 shared_informer.go:311] Waiting for caches to sync for taint-eviction-controller
	I0328 01:33:27.442771    6044 command_runner.go:130] ! I0328 01:07:32.281623       1 controllermanager.go:735] "Started controller" controller="namespace-controller"
	I0328 01:33:27.442771    6044 command_runner.go:130] ! I0328 01:07:32.281708       1 namespace_controller.go:197] "Starting namespace controller"
	I0328 01:33:27.442771    6044 command_runner.go:130] ! I0328 01:07:32.281718       1 shared_informer.go:311] Waiting for caches to sync for namespace
	I0328 01:33:27.442771    6044 command_runner.go:130] ! I0328 01:07:32.316698       1 certificate_controller.go:115] "Starting certificate controller" name="csrsigning-kubelet-serving"
	I0328 01:33:27.442771    6044 command_runner.go:130] ! I0328 01:07:32.316737       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-kubelet-serving
	I0328 01:33:27.442771    6044 command_runner.go:130] ! I0328 01:07:32.316772       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0328 01:33:27.442771    6044 command_runner.go:130] ! I0328 01:07:32.322120       1 certificate_controller.go:115] "Starting certificate controller" name="csrsigning-kubelet-client"
	I0328 01:33:27.442771    6044 command_runner.go:130] ! I0328 01:07:32.322156       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-kubelet-client
	I0328 01:33:27.442771    6044 command_runner.go:130] ! I0328 01:07:32.322181       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0328 01:33:27.442771    6044 command_runner.go:130] ! I0328 01:07:32.327656       1 certificate_controller.go:115] "Starting certificate controller" name="csrsigning-kube-apiserver-client"
	I0328 01:33:27.442771    6044 command_runner.go:130] ! I0328 01:07:32.327690       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client
	I0328 01:33:27.442771    6044 command_runner.go:130] ! I0328 01:07:32.327721       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0328 01:33:27.442771    6044 command_runner.go:130] ! I0328 01:07:32.331471       1 controllermanager.go:735] "Started controller" controller="certificatesigningrequest-signing-controller"
	I0328 01:33:27.442771    6044 command_runner.go:130] ! I0328 01:07:32.331563       1 certificate_controller.go:115] "Starting certificate controller" name="csrsigning-legacy-unknown"
	I0328 01:33:27.442771    6044 command_runner.go:130] ! I0328 01:07:32.331574       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
	I0328 01:33:27.442771    6044 command_runner.go:130] ! I0328 01:07:32.331616       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0328 01:33:27.442771    6044 command_runner.go:130] ! E0328 01:07:32.365862       1 core.go:270] "Failed to start cloud node lifecycle controller" err="no cloud provider provided"
	I0328 01:33:27.442771    6044 command_runner.go:130] ! I0328 01:07:32.365985       1 controllermanager.go:713] "Warning: skipping controller" controller="cloud-node-lifecycle-controller"
	I0328 01:33:27.442771    6044 command_runner.go:130] ! I0328 01:07:32.366024       1 controllermanager.go:687] "Controller is disabled by a feature gate" controller="resourceclaim-controller" requiredFeatureGates=["DynamicResourceAllocation"]
	I0328 01:33:27.442771    6044 command_runner.go:130] ! I0328 01:07:32.520320       1 controllermanager.go:735] "Started controller" controller="endpointslice-controller"
	I0328 01:33:27.442771    6044 command_runner.go:130] ! I0328 01:07:32.520407       1 endpointslice_controller.go:264] "Starting endpoint slice controller"
	I0328 01:33:27.442771    6044 command_runner.go:130] ! I0328 01:07:32.520419       1 shared_informer.go:311] Waiting for caches to sync for endpoint_slice
	I0328 01:33:27.442771    6044 command_runner.go:130] ! I0328 01:07:32.567130       1 controllermanager.go:735] "Started controller" controller="certificatesigningrequest-cleaner-controller"
	I0328 01:33:27.443760    6044 command_runner.go:130] ! I0328 01:07:32.567208       1 cleaner.go:83] "Starting CSR cleaner controller"
	I0328 01:33:27.443760    6044 command_runner.go:130] ! I0328 01:07:32.719261       1 controllermanager.go:735] "Started controller" controller="statefulset-controller"
	I0328 01:33:27.443760    6044 command_runner.go:130] ! I0328 01:07:32.719392       1 stateful_set.go:161] "Starting stateful set controller"
	I0328 01:33:27.443760    6044 command_runner.go:130] ! I0328 01:07:32.719403       1 shared_informer.go:311] Waiting for caches to sync for stateful set
	I0328 01:33:27.443760    6044 command_runner.go:130] ! I0328 01:07:32.872730       1 controllermanager.go:735] "Started controller" controller="persistentvolumeclaim-protection-controller"
	I0328 01:33:27.443760    6044 command_runner.go:130] ! I0328 01:07:32.872869       1 pvc_protection_controller.go:102] "Starting PVC protection controller"
	I0328 01:33:27.443760    6044 command_runner.go:130] ! I0328 01:07:32.873455       1 shared_informer.go:311] Waiting for caches to sync for PVC protection
	I0328 01:33:27.443760    6044 command_runner.go:130] ! I0328 01:07:33.116208       1 garbagecollector.go:155] "Starting controller" controller="garbagecollector"
	I0328 01:33:27.443760    6044 command_runner.go:130] ! I0328 01:07:33.116233       1 shared_informer.go:311] Waiting for caches to sync for garbage collector
	I0328 01:33:27.443760    6044 command_runner.go:130] ! I0328 01:07:33.116257       1 graph_builder.go:302] "Running" component="GraphBuilder"
	I0328 01:33:27.443760    6044 command_runner.go:130] ! I0328 01:07:33.116280       1 controllermanager.go:735] "Started controller" controller="garbage-collector-controller"
	I0328 01:33:27.443760    6044 command_runner.go:130] ! I0328 01:07:33.370650       1 controllermanager.go:735] "Started controller" controller="deployment-controller"
	I0328 01:33:27.443760    6044 command_runner.go:130] ! I0328 01:07:33.370836       1 deployment_controller.go:168] "Starting controller" controller="deployment"
	I0328 01:33:27.443760    6044 command_runner.go:130] ! I0328 01:07:33.370851       1 shared_informer.go:311] Waiting for caches to sync for deployment
	I0328 01:33:27.443760    6044 command_runner.go:130] ! E0328 01:07:33.529036       1 core.go:105] "Failed to start service controller" err="WARNING: no cloud provider provided, services of type LoadBalancer will fail"
	I0328 01:33:27.443760    6044 command_runner.go:130] ! I0328 01:07:33.529209       1 controllermanager.go:713] "Warning: skipping controller" controller="service-lb-controller"
	I0328 01:33:27.443760    6044 command_runner.go:130] ! I0328 01:07:33.674381       1 controllermanager.go:735] "Started controller" controller="replicaset-controller"
	I0328 01:33:27.443760    6044 command_runner.go:130] ! I0328 01:07:33.674638       1 replica_set.go:214] "Starting controller" name="replicaset"
	I0328 01:33:27.443760    6044 command_runner.go:130] ! I0328 01:07:33.674700       1 shared_informer.go:311] Waiting for caches to sync for ReplicaSet
	I0328 01:33:27.443760    6044 command_runner.go:130] ! I0328 01:07:43.727895       1 range_allocator.go:111] "No Secondary Service CIDR provided. Skipping filtering out secondary service addresses"
	I0328 01:33:27.443760    6044 command_runner.go:130] ! I0328 01:07:43.728282       1 controllermanager.go:735] "Started controller" controller="node-ipam-controller"
	I0328 01:33:27.443760    6044 command_runner.go:130] ! I0328 01:07:43.728736       1 node_ipam_controller.go:160] "Starting ipam controller"
	I0328 01:33:27.443760    6044 command_runner.go:130] ! I0328 01:07:43.728751       1 shared_informer.go:311] Waiting for caches to sync for node
	I0328 01:33:27.443760    6044 command_runner.go:130] ! I0328 01:07:43.743975       1 controllermanager.go:735] "Started controller" controller="persistentvolume-expander-controller"
	I0328 01:33:27.443760    6044 command_runner.go:130] ! I0328 01:07:43.744248       1 expand_controller.go:328] "Starting expand controller"
	I0328 01:33:27.443760    6044 command_runner.go:130] ! I0328 01:07:43.744261       1 shared_informer.go:311] Waiting for caches to sync for expand
	I0328 01:33:27.443760    6044 command_runner.go:130] ! I0328 01:07:43.764054       1 controllermanager.go:735] "Started controller" controller="ephemeral-volume-controller"
	I0328 01:33:27.443760    6044 command_runner.go:130] ! I0328 01:07:43.765369       1 controller.go:169] "Starting ephemeral volume controller"
	I0328 01:33:27.443760    6044 command_runner.go:130] ! I0328 01:07:43.765400       1 shared_informer.go:311] Waiting for caches to sync for ephemeral
	I0328 01:33:27.443760    6044 command_runner.go:130] ! I0328 01:07:43.801140       1 controllermanager.go:735] "Started controller" controller="endpoints-controller"
	I0328 01:33:27.443760    6044 command_runner.go:130] ! I0328 01:07:43.801602       1 endpoints_controller.go:174] "Starting endpoint controller"
	I0328 01:33:27.443760    6044 command_runner.go:130] ! I0328 01:07:43.801743       1 shared_informer.go:311] Waiting for caches to sync for endpoint
	I0328 01:33:27.443760    6044 command_runner.go:130] ! I0328 01:07:43.818031       1 controllermanager.go:735] "Started controller" controller="daemonset-controller"
	I0328 01:33:27.443760    6044 command_runner.go:130] ! I0328 01:07:43.818707       1 daemon_controller.go:291] "Starting daemon sets controller"
	I0328 01:33:27.443760    6044 command_runner.go:130] ! I0328 01:07:43.820733       1 shared_informer.go:311] Waiting for caches to sync for daemon sets
	I0328 01:33:27.443760    6044 command_runner.go:130] ! I0328 01:07:43.839571       1 shared_informer.go:311] Waiting for caches to sync for resource quota
	I0328 01:33:27.443760    6044 command_runner.go:130] ! I0328 01:07:43.887668       1 shared_informer.go:318] Caches are synced for TTL after finished
	I0328 01:33:27.443760    6044 command_runner.go:130] ! I0328 01:07:43.905965       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-240000\" does not exist"
	I0328 01:33:27.443760    6044 command_runner.go:130] ! I0328 01:07:43.917970       1 shared_informer.go:318] Caches are synced for cronjob
	I0328 01:33:27.443760    6044 command_runner.go:130] ! I0328 01:07:43.918581       1 shared_informer.go:318] Caches are synced for legacy-service-account-token-cleaner
	I0328 01:33:27.443760    6044 command_runner.go:130] ! I0328 01:07:43.921260       1 shared_informer.go:318] Caches are synced for endpoint_slice
	I0328 01:33:27.443760    6044 command_runner.go:130] ! I0328 01:07:43.921573       1 shared_informer.go:318] Caches are synced for GC
	I0328 01:33:27.443760    6044 command_runner.go:130] ! I0328 01:07:43.921763       1 shared_informer.go:318] Caches are synced for stateful set
	I0328 01:33:27.443760    6044 command_runner.go:130] ! I0328 01:07:43.923599       1 shared_informer.go:318] Caches are synced for ClusterRoleAggregator
	I0328 01:33:27.443760    6044 command_runner.go:130] ! I0328 01:07:43.924267       1 shared_informer.go:311] Waiting for caches to sync for garbage collector
	I0328 01:33:27.443760    6044 command_runner.go:130] ! I0328 01:07:43.922298       1 shared_informer.go:318] Caches are synced for daemon sets
	I0328 01:33:27.443760    6044 command_runner.go:130] ! I0328 01:07:43.928013       1 shared_informer.go:318] Caches are synced for ReplicationController
	I0328 01:33:27.443760    6044 command_runner.go:130] ! I0328 01:07:43.928774       1 shared_informer.go:318] Caches are synced for node
	I0328 01:33:27.443760    6044 command_runner.go:130] ! I0328 01:07:43.932324       1 range_allocator.go:174] "Sending events to api server"
	I0328 01:33:27.443760    6044 command_runner.go:130] ! I0328 01:07:43.932665       1 range_allocator.go:178] "Starting range CIDR allocator"
	I0328 01:33:27.443760    6044 command_runner.go:130] ! I0328 01:07:43.932965       1 shared_informer.go:311] Waiting for caches to sync for cidrallocator
	I0328 01:33:27.444759    6044 command_runner.go:130] ! I0328 01:07:43.933302       1 shared_informer.go:318] Caches are synced for cidrallocator
	I0328 01:33:27.444759    6044 command_runner.go:130] ! I0328 01:07:43.922308       1 shared_informer.go:318] Caches are synced for crt configmap
	I0328 01:33:27.444759    6044 command_runner.go:130] ! I0328 01:07:43.936175       1 shared_informer.go:318] Caches are synced for HPA
	I0328 01:33:27.444759    6044 command_runner.go:130] ! I0328 01:07:43.933370       1 shared_informer.go:318] Caches are synced for taint
	I0328 01:33:27.444759    6044 command_runner.go:130] ! I0328 01:07:43.936479       1 node_lifecycle_controller.go:1222] "Initializing eviction metric for zone" zone=""
	I0328 01:33:27.445767    6044 command_runner.go:130] ! I0328 01:07:43.936564       1 node_lifecycle_controller.go:874] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-240000"
	I0328 01:33:27.445767    6044 command_runner.go:130] ! I0328 01:07:43.936602       1 node_lifecycle_controller.go:1026] "Controller detected that all Nodes are not-Ready. Entering master disruption mode"
	I0328 01:33:27.445767    6044 command_runner.go:130] ! I0328 01:07:43.937774       1 event.go:376] "Event occurred" object="multinode-240000" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-240000 event: Registered Node multinode-240000 in Controller"
	I0328 01:33:27.445767    6044 command_runner.go:130] ! I0328 01:07:43.945317       1 shared_informer.go:318] Caches are synced for taint-eviction-controller
	I0328 01:33:27.445767    6044 command_runner.go:130] ! I0328 01:07:43.945634       1 shared_informer.go:318] Caches are synced for expand
	I0328 01:33:27.445767    6044 command_runner.go:130] ! I0328 01:07:43.953475       1 shared_informer.go:318] Caches are synced for PV protection
	I0328 01:33:27.445767    6044 command_runner.go:130] ! I0328 01:07:43.955430       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-240000" podCIDRs=["10.244.0.0/24"]
	I0328 01:33:27.445767    6044 command_runner.go:130] ! I0328 01:07:43.967780       1 shared_informer.go:318] Caches are synced for ephemeral
	I0328 01:33:27.445767    6044 command_runner.go:130] ! I0328 01:07:43.970146       1 shared_informer.go:318] Caches are synced for bootstrap_signer
	I0328 01:33:27.445767    6044 command_runner.go:130] ! I0328 01:07:43.973346       1 shared_informer.go:318] Caches are synced for persistent volume
	I0328 01:33:27.445767    6044 command_runner.go:130] ! I0328 01:07:43.973608       1 shared_informer.go:318] Caches are synced for PVC protection
	I0328 01:33:27.445767    6044 command_runner.go:130] ! I0328 01:07:43.981178       1 shared_informer.go:318] Caches are synced for endpoint_slice_mirroring
	I0328 01:33:27.445767    6044 command_runner.go:130] ! I0328 01:07:43.981918       1 event.go:376] "Event occurred" object="kube-system/kube-scheduler-multinode-240000" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0328 01:33:27.445767    6044 command_runner.go:130] ! I0328 01:07:43.981953       1 event.go:376] "Event occurred" object="kube-system/kube-apiserver-multinode-240000" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0328 01:33:27.445767    6044 command_runner.go:130] ! I0328 01:07:43.981962       1 event.go:376] "Event occurred" object="kube-system/etcd-multinode-240000" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0328 01:33:27.445767    6044 command_runner.go:130] ! I0328 01:07:43.982017       1 shared_informer.go:318] Caches are synced for namespace
	I0328 01:33:27.445767    6044 command_runner.go:130] ! I0328 01:07:43.982124       1 shared_informer.go:318] Caches are synced for service account
	I0328 01:33:27.445767    6044 command_runner.go:130] ! I0328 01:07:43.983577       1 event.go:376] "Event occurred" object="kube-system/kube-controller-manager-multinode-240000" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0328 01:33:27.445767    6044 command_runner.go:130] ! I0328 01:07:43.992236       1 shared_informer.go:318] Caches are synced for job
	I0328 01:33:27.445767    6044 command_runner.go:130] ! I0328 01:07:43.992438       1 shared_informer.go:318] Caches are synced for TTL
	I0328 01:33:27.445767    6044 command_runner.go:130] ! I0328 01:07:43.995152       1 shared_informer.go:318] Caches are synced for attach detach
	I0328 01:33:27.445767    6044 command_runner.go:130] ! I0328 01:07:44.003250       1 shared_informer.go:318] Caches are synced for endpoint
	I0328 01:33:27.445767    6044 command_runner.go:130] ! I0328 01:07:44.023343       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-client
	I0328 01:33:27.445767    6044 command_runner.go:130] ! I0328 01:07:44.023546       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-serving
	I0328 01:33:27.445767    6044 command_runner.go:130] ! I0328 01:07:44.030529       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0328 01:33:27.445767    6044 command_runner.go:130] ! I0328 01:07:44.032370       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-legacy-unknown
	I0328 01:33:27.445767    6044 command_runner.go:130] ! I0328 01:07:44.039826       1 shared_informer.go:318] Caches are synced for resource quota
	I0328 01:33:27.445767    6044 command_runner.go:130] ! I0328 01:07:44.039875       1 shared_informer.go:318] Caches are synced for disruption
	I0328 01:33:27.445767    6044 command_runner.go:130] ! I0328 01:07:44.059155       1 shared_informer.go:318] Caches are synced for certificate-csrapproving
	I0328 01:33:27.445767    6044 command_runner.go:130] ! I0328 01:07:44.071020       1 shared_informer.go:318] Caches are synced for deployment
	I0328 01:33:27.445767    6044 command_runner.go:130] ! I0328 01:07:44.074821       1 shared_informer.go:318] Caches are synced for ReplicaSet
	I0328 01:33:27.445767    6044 command_runner.go:130] ! I0328 01:07:44.095916       1 shared_informer.go:318] Caches are synced for resource quota
	I0328 01:33:27.445767    6044 command_runner.go:130] ! I0328 01:07:44.097596       1 event.go:376] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-rwghf"
	I0328 01:33:27.445767    6044 command_runner.go:130] ! I0328 01:07:44.101053       1 event.go:376] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-47rqg"
	I0328 01:33:27.445767    6044 command_runner.go:130] ! I0328 01:07:44.321636       1 event.go:376] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-76f75df574 to 2"
	I0328 01:33:27.445767    6044 command_runner.go:130] ! I0328 01:07:44.505533       1 event.go:376] "Event occurred" object="kube-system/coredns-76f75df574" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-76f75df574-fgw8j"
	I0328 01:33:27.445767    6044 command_runner.go:130] ! I0328 01:07:44.516581       1 shared_informer.go:318] Caches are synced for garbage collector
	I0328 01:33:27.445767    6044 command_runner.go:130] ! I0328 01:07:44.516605       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I0328 01:33:27.445767    6044 command_runner.go:130] ! I0328 01:07:44.526884       1 shared_informer.go:318] Caches are synced for garbage collector
	I0328 01:33:27.445767    6044 command_runner.go:130] ! I0328 01:07:44.626020       1 event.go:376] "Event occurred" object="kube-system/coredns-76f75df574" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-76f75df574-776ph"
	I0328 01:33:27.445767    6044 command_runner.go:130] ! I0328 01:07:44.696026       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="375.988233ms"
	I0328 01:33:27.445767    6044 command_runner.go:130] ! I0328 01:07:44.735389       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="39.221627ms"
	I0328 01:33:27.445767    6044 command_runner.go:130] ! I0328 01:07:44.735856       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="390.399µs"
	I0328 01:33:27.445767    6044 command_runner.go:130] ! I0328 01:07:45.456688       1 event.go:376] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-76f75df574 to 1 from 2"
	I0328 01:33:27.445767    6044 command_runner.go:130] ! I0328 01:07:45.536906       1 event.go:376] "Event occurred" object="kube-system/coredns-76f75df574" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-76f75df574-fgw8j"
	I0328 01:33:27.445767    6044 command_runner.go:130] ! I0328 01:07:45.583335       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="126.427189ms"
	I0328 01:33:27.445767    6044 command_runner.go:130] ! I0328 01:07:45.637187       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="53.741283ms"
	I0328 01:33:27.445767    6044 command_runner.go:130] ! I0328 01:07:45.710380       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="73.035205ms"
	I0328 01:33:27.445767    6044 command_runner.go:130] ! I0328 01:07:45.710568       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="73.7µs"
	I0328 01:33:27.445767    6044 command_runner.go:130] ! I0328 01:07:57.839298       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="81.8µs"
	I0328 01:33:27.445767    6044 command_runner.go:130] ! I0328 01:07:57.891332       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="135.3µs"
	I0328 01:33:27.445767    6044 command_runner.go:130] ! I0328 01:07:58.938669       1 node_lifecycle_controller.go:1045] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I0328 01:33:27.446766    6044 command_runner.go:130] ! I0328 01:07:59.949779       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="25.944009ms"
	I0328 01:33:27.446766    6044 command_runner.go:130] ! I0328 01:07:59.950218       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="327.807µs"
	I0328 01:33:27.446766    6044 command_runner.go:130] ! I0328 01:10:54.764176       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-240000-m02\" does not exist"
	I0328 01:33:27.446766    6044 command_runner.go:130] ! I0328 01:10:54.803820       1 event.go:376] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-hsnfl"
	I0328 01:33:27.446766    6044 command_runner.go:130] ! I0328 01:10:54.803944       1 event.go:376] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-t88gz"
	I0328 01:33:27.446766    6044 command_runner.go:130] ! I0328 01:10:54.804885       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-240000-m02" podCIDRs=["10.244.1.0/24"]
	I0328 01:33:27.446766    6044 command_runner.go:130] ! I0328 01:10:58.975442       1 node_lifecycle_controller.go:874] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-240000-m02"
	I0328 01:33:27.446766    6044 command_runner.go:130] ! I0328 01:10:58.975715       1 event.go:376] "Event occurred" object="multinode-240000-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-240000-m02 event: Registered Node multinode-240000-m02 in Controller"
	I0328 01:33:27.446766    6044 command_runner.go:130] ! I0328 01:11:17.665064       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-240000-m02"
	I0328 01:33:27.446766    6044 command_runner.go:130] ! I0328 01:11:46.242165       1 event.go:376] "Event occurred" object="default/busybox" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set busybox-7fdf7869d9 to 2"
	I0328 01:33:27.446766    6044 command_runner.go:130] ! I0328 01:11:46.265582       1 event.go:376] "Event occurred" object="default/busybox-7fdf7869d9" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-7fdf7869d9-zgwm4"
	I0328 01:33:27.446766    6044 command_runner.go:130] ! I0328 01:11:46.287052       1 event.go:376] "Event occurred" object="default/busybox-7fdf7869d9" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-7fdf7869d9-ct428"
	I0328 01:33:27.446766    6044 command_runner.go:130] ! I0328 01:11:46.306059       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="64.440988ms"
	I0328 01:33:27.446766    6044 command_runner.go:130] ! I0328 01:11:46.352353       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="46.180707ms"
	I0328 01:33:27.446766    6044 command_runner.go:130] ! I0328 01:11:46.354927       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="106.701µs"
	I0328 01:33:27.446766    6044 command_runner.go:130] ! I0328 01:11:46.380446       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="75.4µs"
	I0328 01:33:27.446766    6044 command_runner.go:130] ! I0328 01:11:49.177937       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="20.338671ms"
	I0328 01:33:27.446766    6044 command_runner.go:130] ! I0328 01:11:49.178143       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="95.8µs"
	I0328 01:33:27.446766    6044 command_runner.go:130] ! I0328 01:11:49.352601       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="13.382248ms"
	I0328 01:33:27.446766    6044 command_runner.go:130] ! I0328 01:11:49.353052       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="62.5µs"
	I0328 01:33:27.446766    6044 command_runner.go:130] ! I0328 01:15:58.358805       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-240000-m03\" does not exist"
	I0328 01:33:27.446766    6044 command_runner.go:130] ! I0328 01:15:58.359348       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-240000-m02"
	I0328 01:33:27.446766    6044 command_runner.go:130] ! I0328 01:15:58.402286       1 event.go:376] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-jvgx2"
	I0328 01:33:27.446766    6044 command_runner.go:130] ! I0328 01:15:58.402827       1 event.go:376] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-55rch"
	I0328 01:33:27.446766    6044 command_runner.go:130] ! I0328 01:15:58.405421       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-240000-m03" podCIDRs=["10.244.2.0/24"]
	I0328 01:33:27.446766    6044 command_runner.go:130] ! I0328 01:15:59.058703       1 node_lifecycle_controller.go:874] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-240000-m03"
	I0328 01:33:27.446766    6044 command_runner.go:130] ! I0328 01:15:59.059180       1 event.go:376] "Event occurred" object="multinode-240000-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-240000-m03 event: Registered Node multinode-240000-m03 in Controller"
	I0328 01:33:27.446766    6044 command_runner.go:130] ! I0328 01:16:20.751668       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-240000-m02"
	I0328 01:33:27.446766    6044 command_runner.go:130] ! I0328 01:24:29.197407       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-240000-m02"
	I0328 01:33:27.446766    6044 command_runner.go:130] ! I0328 01:24:29.203202       1 event.go:376] "Event occurred" object="multinode-240000-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node multinode-240000-m03 status is now: NodeNotReady"
	I0328 01:33:27.446766    6044 command_runner.go:130] ! I0328 01:24:29.229608       1 event.go:376] "Event occurred" object="kube-system/kube-proxy-55rch" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0328 01:33:27.446766    6044 command_runner.go:130] ! I0328 01:24:29.247522       1 event.go:376] "Event occurred" object="kube-system/kindnet-jvgx2" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0328 01:33:27.446766    6044 command_runner.go:130] ! I0328 01:27:23.686830       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-240000-m02"
	I0328 01:33:27.446766    6044 command_runner.go:130] ! I0328 01:27:24.286010       1 event.go:376] "Event occurred" object="multinode-240000-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RemovingNode" message="Node multinode-240000-m03 event: Removing Node multinode-240000-m03 from Controller"
	I0328 01:33:27.446766    6044 command_runner.go:130] ! I0328 01:27:30.358404       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-240000-m02"
	I0328 01:33:27.446766    6044 command_runner.go:130] ! I0328 01:27:30.361770       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-240000-m03\" does not exist"
	I0328 01:33:27.446766    6044 command_runner.go:130] ! I0328 01:27:30.394360       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-240000-m03" podCIDRs=["10.244.3.0/24"]
	I0328 01:33:27.446766    6044 command_runner.go:130] ! I0328 01:27:34.288477       1 event.go:376] "Event occurred" object="multinode-240000-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-240000-m03 event: Registered Node multinode-240000-m03 in Controller"
	I0328 01:33:27.446766    6044 command_runner.go:130] ! I0328 01:27:36.134336       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-240000-m03"
	I0328 01:33:27.446766    6044 command_runner.go:130] ! I0328 01:29:14.344304       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-240000-m02"
	I0328 01:33:27.446766    6044 command_runner.go:130] ! I0328 01:29:14.346290       1 event.go:376] "Event occurred" object="multinode-240000-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node multinode-240000-m03 status is now: NodeNotReady"
	I0328 01:33:27.446766    6044 command_runner.go:130] ! I0328 01:29:14.370766       1 event.go:376] "Event occurred" object="kube-system/kube-proxy-55rch" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0328 01:33:27.446766    6044 command_runner.go:130] ! I0328 01:29:14.392308       1 event.go:376] "Event occurred" object="kube-system/kindnet-jvgx2" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0328 01:33:27.472768    6044 logs.go:123] Gathering logs for kindnet [dc9808261b21] ...
	I0328 01:33:27.472768    6044 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc9808261b21"
	I0328 01:33:27.515365    6044 command_runner.go:130] ! I0328 01:18:33.819057       1 main.go:227] handling current node
	I0328 01:33:27.515510    6044 command_runner.go:130] ! I0328 01:18:33.819073       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:27.515510    6044 command_runner.go:130] ! I0328 01:18:33.819080       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:27.515510    6044 command_runner.go:130] ! I0328 01:18:33.819256       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:27.515586    6044 command_runner.go:130] ! I0328 01:18:33.819279       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:27.515586    6044 command_runner.go:130] ! I0328 01:18:43.840507       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:27.515586    6044 command_runner.go:130] ! I0328 01:18:43.840617       1 main.go:227] handling current node
	I0328 01:33:27.515586    6044 command_runner.go:130] ! I0328 01:18:43.840633       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:27.515586    6044 command_runner.go:130] ! I0328 01:18:43.840643       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:27.515586    6044 command_runner.go:130] ! I0328 01:18:43.841217       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:27.515586    6044 command_runner.go:130] ! I0328 01:18:43.841333       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:27.515586    6044 command_runner.go:130] ! I0328 01:18:53.861521       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:27.515691    6044 command_runner.go:130] ! I0328 01:18:53.861738       1 main.go:227] handling current node
	I0328 01:33:27.515719    6044 command_runner.go:130] ! I0328 01:18:53.861763       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:27.515719    6044 command_runner.go:130] ! I0328 01:18:53.861779       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:27.515719    6044 command_runner.go:130] ! I0328 01:18:53.864849       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:27.515719    6044 command_runner.go:130] ! I0328 01:18:53.864869       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:27.515719    6044 command_runner.go:130] ! I0328 01:19:03.880199       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:27.515719    6044 command_runner.go:130] ! I0328 01:19:03.880733       1 main.go:227] handling current node
	I0328 01:33:27.515719    6044 command_runner.go:130] ! I0328 01:19:03.880872       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:27.515807    6044 command_runner.go:130] ! I0328 01:19:03.880900       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:27.515827    6044 command_runner.go:130] ! I0328 01:19:03.881505       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:27.515827    6044 command_runner.go:130] ! I0328 01:19:03.881543       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:27.515827    6044 command_runner.go:130] ! I0328 01:19:13.889436       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:27.515827    6044 command_runner.go:130] ! I0328 01:19:13.889552       1 main.go:227] handling current node
	I0328 01:33:27.515888    6044 command_runner.go:130] ! I0328 01:19:13.889571       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:27.515888    6044 command_runner.go:130] ! I0328 01:19:13.889581       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:27.515888    6044 command_runner.go:130] ! I0328 01:19:13.889757       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:27.515888    6044 command_runner.go:130] ! I0328 01:19:13.889789       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:27.515888    6044 command_runner.go:130] ! I0328 01:19:23.898023       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:27.515972    6044 command_runner.go:130] ! I0328 01:19:23.898229       1 main.go:227] handling current node
	I0328 01:33:27.515972    6044 command_runner.go:130] ! I0328 01:19:23.898245       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:27.515972    6044 command_runner.go:130] ! I0328 01:19:23.898277       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:27.515972    6044 command_runner.go:130] ! I0328 01:19:23.898405       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:27.516034    6044 command_runner.go:130] ! I0328 01:19:23.898492       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:27.516034    6044 command_runner.go:130] ! I0328 01:19:33.905977       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:27.516085    6044 command_runner.go:130] ! I0328 01:19:33.906123       1 main.go:227] handling current node
	I0328 01:33:27.516085    6044 command_runner.go:130] ! I0328 01:19:33.906157       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:27.516085    6044 command_runner.go:130] ! I0328 01:19:33.906167       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:27.516085    6044 command_runner.go:130] ! I0328 01:19:33.906618       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:27.516085    6044 command_runner.go:130] ! I0328 01:19:33.906762       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:27.516085    6044 command_runner.go:130] ! I0328 01:19:43.914797       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:27.516085    6044 command_runner.go:130] ! I0328 01:19:43.914849       1 main.go:227] handling current node
	I0328 01:33:27.516085    6044 command_runner.go:130] ! I0328 01:19:43.914863       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:27.516085    6044 command_runner.go:130] ! I0328 01:19:43.914872       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:27.516085    6044 command_runner.go:130] ! I0328 01:19:43.915508       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:27.516085    6044 command_runner.go:130] ! I0328 01:19:43.915608       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:27.516085    6044 command_runner.go:130] ! I0328 01:19:53.928273       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:27.516085    6044 command_runner.go:130] ! I0328 01:19:53.928372       1 main.go:227] handling current node
	I0328 01:33:27.516085    6044 command_runner.go:130] ! I0328 01:19:53.928389       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:27.516085    6044 command_runner.go:130] ! I0328 01:19:53.928398       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:27.516620    6044 command_runner.go:130] ! I0328 01:19:53.928659       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:27.516699    6044 command_runner.go:130] ! I0328 01:19:53.928813       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:27.516699    6044 command_runner.go:130] ! I0328 01:20:03.943868       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:27.516699    6044 command_runner.go:130] ! I0328 01:20:03.943974       1 main.go:227] handling current node
	I0328 01:33:27.516762    6044 command_runner.go:130] ! I0328 01:20:03.943995       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:27.516762    6044 command_runner.go:130] ! I0328 01:20:03.944004       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:27.516762    6044 command_runner.go:130] ! I0328 01:20:03.944882       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:27.516762    6044 command_runner.go:130] ! I0328 01:20:03.944986       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:27.516762    6044 command_runner.go:130] ! I0328 01:20:13.959538       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:27.516762    6044 command_runner.go:130] ! I0328 01:20:13.959588       1 main.go:227] handling current node
	I0328 01:33:27.516896    6044 command_runner.go:130] ! I0328 01:20:13.959601       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:27.516896    6044 command_runner.go:130] ! I0328 01:20:13.959609       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:27.516972    6044 command_runner.go:130] ! I0328 01:20:13.960072       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:27.516972    6044 command_runner.go:130] ! I0328 01:20:13.960245       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:27.516972    6044 command_runner.go:130] ! I0328 01:20:23.967471       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:27.516972    6044 command_runner.go:130] ! I0328 01:20:23.967523       1 main.go:227] handling current node
	I0328 01:33:27.516972    6044 command_runner.go:130] ! I0328 01:20:23.967537       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:27.517034    6044 command_runner.go:130] ! I0328 01:20:23.967547       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:27.517034    6044 command_runner.go:130] ! I0328 01:20:23.968155       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:27.517093    6044 command_runner.go:130] ! I0328 01:20:23.968173       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:27.517093    6044 command_runner.go:130] ! I0328 01:20:33.977018       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:27.517093    6044 command_runner.go:130] ! I0328 01:20:33.977224       1 main.go:227] handling current node
	I0328 01:33:27.517159    6044 command_runner.go:130] ! I0328 01:20:33.977259       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:27.517186    6044 command_runner.go:130] ! I0328 01:20:33.977287       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:27.517186    6044 command_runner.go:130] ! I0328 01:20:33.978024       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:27.517186    6044 command_runner.go:130] ! I0328 01:20:33.978173       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:27.517245    6044 command_runner.go:130] ! I0328 01:20:43.987057       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:27.517275    6044 command_runner.go:130] ! I0328 01:20:43.987266       1 main.go:227] handling current node
	I0328 01:33:27.517296    6044 command_runner.go:130] ! I0328 01:20:43.987283       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:27.517322    6044 command_runner.go:130] ! I0328 01:20:43.987293       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:27.517322    6044 command_runner.go:130] ! I0328 01:20:43.987429       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:27.517322    6044 command_runner.go:130] ! I0328 01:20:43.987462       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:27.517322    6044 command_runner.go:130] ! I0328 01:20:53.994024       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:27.517322    6044 command_runner.go:130] ! I0328 01:20:53.994070       1 main.go:227] handling current node
	I0328 01:33:27.517322    6044 command_runner.go:130] ! I0328 01:20:53.994120       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:27.517322    6044 command_runner.go:130] ! I0328 01:20:53.994132       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:27.517322    6044 command_runner.go:130] ! I0328 01:20:53.994628       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:27.517322    6044 command_runner.go:130] ! I0328 01:20:53.994669       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:27.517322    6044 command_runner.go:130] ! I0328 01:21:04.009908       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:27.517322    6044 command_runner.go:130] ! I0328 01:21:04.010006       1 main.go:227] handling current node
	I0328 01:33:27.517322    6044 command_runner.go:130] ! I0328 01:21:04.010023       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:27.517322    6044 command_runner.go:130] ! I0328 01:21:04.010033       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:27.517322    6044 command_runner.go:130] ! I0328 01:21:04.010413       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:27.517322    6044 command_runner.go:130] ! I0328 01:21:04.010445       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:27.517322    6044 command_runner.go:130] ! I0328 01:21:14.024266       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:27.517322    6044 command_runner.go:130] ! I0328 01:21:14.024350       1 main.go:227] handling current node
	I0328 01:33:27.517322    6044 command_runner.go:130] ! I0328 01:21:14.024365       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:27.517322    6044 command_runner.go:130] ! I0328 01:21:14.024372       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:27.517322    6044 command_runner.go:130] ! I0328 01:21:14.024495       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:27.517322    6044 command_runner.go:130] ! I0328 01:21:14.024525       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:27.517322    6044 command_runner.go:130] ! I0328 01:21:24.033056       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:27.517322    6044 command_runner.go:130] ! I0328 01:21:24.033221       1 main.go:227] handling current node
	I0328 01:33:27.517322    6044 command_runner.go:130] ! I0328 01:21:24.033244       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:27.517322    6044 command_runner.go:130] ! I0328 01:21:24.033254       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:27.517322    6044 command_runner.go:130] ! I0328 01:21:24.033447       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:27.517322    6044 command_runner.go:130] ! I0328 01:21:24.033718       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:27.517322    6044 command_runner.go:130] ! I0328 01:21:34.054141       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:27.517322    6044 command_runner.go:130] ! I0328 01:21:34.054348       1 main.go:227] handling current node
	I0328 01:33:27.517322    6044 command_runner.go:130] ! I0328 01:21:34.054367       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:27.517322    6044 command_runner.go:130] ! I0328 01:21:34.054377       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:27.517322    6044 command_runner.go:130] ! I0328 01:21:34.056796       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:27.517322    6044 command_runner.go:130] ! I0328 01:21:34.056838       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:27.517322    6044 command_runner.go:130] ! I0328 01:21:44.063011       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:27.517322    6044 command_runner.go:130] ! I0328 01:21:44.063388       1 main.go:227] handling current node
	I0328 01:33:27.517322    6044 command_runner.go:130] ! I0328 01:21:44.063639       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:27.517322    6044 command_runner.go:130] ! I0328 01:21:44.063794       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:27.517322    6044 command_runner.go:130] ! I0328 01:21:44.064166       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:27.517876    6044 command_runner.go:130] ! I0328 01:21:44.064351       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:27.517876    6044 command_runner.go:130] ! I0328 01:21:54.080807       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:27.517990    6044 command_runner.go:130] ! I0328 01:21:54.080904       1 main.go:227] handling current node
	I0328 01:33:27.517990    6044 command_runner.go:130] ! I0328 01:21:54.080921       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:27.517990    6044 command_runner.go:130] ! I0328 01:21:54.080930       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:27.517990    6044 command_runner.go:130] ! I0328 01:21:54.081415       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:27.517990    6044 command_runner.go:130] ! I0328 01:21:54.081491       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:27.517990    6044 command_runner.go:130] ! I0328 01:22:04.094961       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:27.518061    6044 command_runner.go:130] ! I0328 01:22:04.095397       1 main.go:227] handling current node
	I0328 01:33:27.518080    6044 command_runner.go:130] ! I0328 01:22:04.095905       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:27.518080    6044 command_runner.go:130] ! I0328 01:22:04.096341       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:27.518080    6044 command_runner.go:130] ! I0328 01:22:04.096776       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:27.518080    6044 command_runner.go:130] ! I0328 01:22:04.096877       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:27.518080    6044 command_runner.go:130] ! I0328 01:22:14.117899       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:27.518148    6044 command_runner.go:130] ! I0328 01:22:14.118038       1 main.go:227] handling current node
	I0328 01:33:27.518148    6044 command_runner.go:130] ! I0328 01:22:14.118158       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:27.518194    6044 command_runner.go:130] ! I0328 01:22:14.118310       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:27.518242    6044 command_runner.go:130] ! I0328 01:22:14.118821       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:27.518242    6044 command_runner.go:130] ! I0328 01:22:14.119057       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:27.518242    6044 command_runner.go:130] ! I0328 01:22:24.139816       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:27.518242    6044 command_runner.go:130] ! I0328 01:22:24.140951       1 main.go:227] handling current node
	I0328 01:33:27.518242    6044 command_runner.go:130] ! I0328 01:22:24.140979       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:27.518308    6044 command_runner.go:130] ! I0328 01:22:24.140991       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:27.518327    6044 command_runner.go:130] ! I0328 01:22:24.141167       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:27.518327    6044 command_runner.go:130] ! I0328 01:22:24.141178       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:27.518327    6044 command_runner.go:130] ! I0328 01:22:34.156977       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:27.519487    6044 command_runner.go:130] ! I0328 01:22:34.157189       1 main.go:227] handling current node
	I0328 01:33:27.519487    6044 command_runner.go:130] ! I0328 01:22:34.157704       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:27.519487    6044 command_runner.go:130] ! I0328 01:22:34.157819       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:27.519487    6044 command_runner.go:130] ! I0328 01:22:34.158025       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:27.519487    6044 command_runner.go:130] ! I0328 01:22:34.158059       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:27.519487    6044 command_runner.go:130] ! I0328 01:22:44.166881       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:27.519487    6044 command_runner.go:130] ! I0328 01:22:44.167061       1 main.go:227] handling current node
	I0328 01:33:27.519487    6044 command_runner.go:130] ! I0328 01:22:44.167232       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:27.519487    6044 command_runner.go:130] ! I0328 01:22:44.167380       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:27.520029    6044 command_runner.go:130] ! I0328 01:22:44.167748       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:27.520029    6044 command_runner.go:130] ! I0328 01:22:44.167956       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:27.520090    6044 command_runner.go:130] ! I0328 01:22:54.177031       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:27.520090    6044 command_runner.go:130] ! I0328 01:22:54.177191       1 main.go:227] handling current node
	I0328 01:33:27.520090    6044 command_runner.go:130] ! I0328 01:22:54.177209       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:27.520169    6044 command_runner.go:130] ! I0328 01:22:54.177218       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:27.520169    6044 command_runner.go:130] ! I0328 01:22:54.177774       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:27.520169    6044 command_runner.go:130] ! I0328 01:22:54.177906       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:27.520169    6044 command_runner.go:130] ! I0328 01:23:04.192931       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:27.520169    6044 command_runner.go:130] ! I0328 01:23:04.193190       1 main.go:227] handling current node
	I0328 01:33:27.520238    6044 command_runner.go:130] ! I0328 01:23:04.193208       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:27.520238    6044 command_runner.go:130] ! I0328 01:23:04.193218       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:27.520238    6044 command_runner.go:130] ! I0328 01:23:04.193613       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:27.520238    6044 command_runner.go:130] ! I0328 01:23:04.193699       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:27.520238    6044 command_runner.go:130] ! I0328 01:23:14.203281       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:27.520297    6044 command_runner.go:130] ! I0328 01:23:14.203390       1 main.go:227] handling current node
	I0328 01:33:27.520321    6044 command_runner.go:130] ! I0328 01:23:14.203406       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:27.520321    6044 command_runner.go:130] ! I0328 01:23:14.203415       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:27.520349    6044 command_runner.go:130] ! I0328 01:23:14.204005       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:27.520349    6044 command_runner.go:130] ! I0328 01:23:14.204201       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:27.520349    6044 command_runner.go:130] ! I0328 01:23:24.220758       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:27.520349    6044 command_runner.go:130] ! I0328 01:23:24.220806       1 main.go:227] handling current node
	I0328 01:33:27.520349    6044 command_runner.go:130] ! I0328 01:23:24.220822       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:27.520349    6044 command_runner.go:130] ! I0328 01:23:24.220829       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:27.520349    6044 command_runner.go:130] ! I0328 01:23:24.221546       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:27.520349    6044 command_runner.go:130] ! I0328 01:23:24.221683       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:27.520349    6044 command_runner.go:130] ! I0328 01:23:34.228494       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:27.520349    6044 command_runner.go:130] ! I0328 01:23:34.228589       1 main.go:227] handling current node
	I0328 01:33:27.520349    6044 command_runner.go:130] ! I0328 01:23:34.228604       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:27.520349    6044 command_runner.go:130] ! I0328 01:23:34.228613       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:27.520349    6044 command_runner.go:130] ! I0328 01:23:34.229312       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:27.520349    6044 command_runner.go:130] ! I0328 01:23:34.229577       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:27.520349    6044 command_runner.go:130] ! I0328 01:23:44.244452       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:27.520349    6044 command_runner.go:130] ! I0328 01:23:44.244582       1 main.go:227] handling current node
	I0328 01:33:27.520349    6044 command_runner.go:130] ! I0328 01:23:44.244601       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:27.520349    6044 command_runner.go:130] ! I0328 01:23:44.244611       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:27.520349    6044 command_runner.go:130] ! I0328 01:23:44.245136       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:27.520349    6044 command_runner.go:130] ! I0328 01:23:44.245156       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:27.520349    6044 command_runner.go:130] ! I0328 01:23:54.250789       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:27.520349    6044 command_runner.go:130] ! I0328 01:23:54.250891       1 main.go:227] handling current node
	I0328 01:33:27.520349    6044 command_runner.go:130] ! I0328 01:23:54.250907       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:27.520349    6044 command_runner.go:130] ! I0328 01:23:54.250915       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:27.520349    6044 command_runner.go:130] ! I0328 01:23:54.251035       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:27.520349    6044 command_runner.go:130] ! I0328 01:23:54.251227       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:27.520349    6044 command_runner.go:130] ! I0328 01:24:04.266517       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:27.520349    6044 command_runner.go:130] ! I0328 01:24:04.266634       1 main.go:227] handling current node
	I0328 01:33:27.520349    6044 command_runner.go:130] ! I0328 01:24:04.266650       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:27.520349    6044 command_runner.go:130] ! I0328 01:24:04.266659       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:27.520349    6044 command_runner.go:130] ! I0328 01:24:04.266860       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:27.520349    6044 command_runner.go:130] ! I0328 01:24:04.266944       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:27.520349    6044 command_runner.go:130] ! I0328 01:24:14.281321       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:27.520349    6044 command_runner.go:130] ! I0328 01:24:14.281432       1 main.go:227] handling current node
	I0328 01:33:27.520349    6044 command_runner.go:130] ! I0328 01:24:14.281448       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:27.520349    6044 command_runner.go:130] ! I0328 01:24:14.281474       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:27.520349    6044 command_runner.go:130] ! I0328 01:24:14.281660       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:27.520349    6044 command_runner.go:130] ! I0328 01:24:14.281692       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:27.520886    6044 command_runner.go:130] ! I0328 01:24:24.289822       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:27.520947    6044 command_runner.go:130] ! I0328 01:24:24.290280       1 main.go:227] handling current node
	I0328 01:33:27.520947    6044 command_runner.go:130] ! I0328 01:24:24.290352       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:27.520947    6044 command_runner.go:130] ! I0328 01:24:24.290467       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:27.520947    6044 command_runner.go:130] ! I0328 01:24:24.290854       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:27.520947    6044 command_runner.go:130] ! I0328 01:24:24.290943       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:27.520947    6044 command_runner.go:130] ! I0328 01:24:34.303810       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:27.520947    6044 command_runner.go:130] ! I0328 01:24:34.303934       1 main.go:227] handling current node
	I0328 01:33:27.520947    6044 command_runner.go:130] ! I0328 01:24:34.303965       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:27.520947    6044 command_runner.go:130] ! I0328 01:24:34.303998       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:27.520947    6044 command_runner.go:130] ! I0328 01:24:34.304417       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:27.520947    6044 command_runner.go:130] ! I0328 01:24:34.304435       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:27.520947    6044 command_runner.go:130] ! I0328 01:24:44.325930       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:27.520947    6044 command_runner.go:130] ! I0328 01:24:44.326037       1 main.go:227] handling current node
	I0328 01:33:27.520947    6044 command_runner.go:130] ! I0328 01:24:44.326055       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:27.520947    6044 command_runner.go:130] ! I0328 01:24:44.326064       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:27.520947    6044 command_runner.go:130] ! I0328 01:24:44.327133       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:27.520947    6044 command_runner.go:130] ! I0328 01:24:44.327169       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:27.520947    6044 command_runner.go:130] ! I0328 01:24:54.342811       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:27.520947    6044 command_runner.go:130] ! I0328 01:24:54.342842       1 main.go:227] handling current node
	I0328 01:33:27.520947    6044 command_runner.go:130] ! I0328 01:24:54.342871       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:27.520947    6044 command_runner.go:130] ! I0328 01:24:54.342878       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:27.520947    6044 command_runner.go:130] ! I0328 01:24:54.343008       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:27.520947    6044 command_runner.go:130] ! I0328 01:24:54.343016       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:27.520947    6044 command_runner.go:130] ! I0328 01:25:04.359597       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:27.520947    6044 command_runner.go:130] ! I0328 01:25:04.359702       1 main.go:227] handling current node
	I0328 01:33:27.520947    6044 command_runner.go:130] ! I0328 01:25:04.359718       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:27.520947    6044 command_runner.go:130] ! I0328 01:25:04.359727       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:27.520947    6044 command_runner.go:130] ! I0328 01:25:04.360480       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:27.520947    6044 command_runner.go:130] ! I0328 01:25:04.360570       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:27.520947    6044 command_runner.go:130] ! I0328 01:25:14.367988       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:27.520947    6044 command_runner.go:130] ! I0328 01:25:14.368593       1 main.go:227] handling current node
	I0328 01:33:27.520947    6044 command_runner.go:130] ! I0328 01:25:14.368613       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:27.520947    6044 command_runner.go:130] ! I0328 01:25:14.368623       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:27.520947    6044 command_runner.go:130] ! I0328 01:25:14.368889       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:27.520947    6044 command_runner.go:130] ! I0328 01:25:14.368925       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:27.520947    6044 command_runner.go:130] ! I0328 01:25:24.402024       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:27.520947    6044 command_runner.go:130] ! I0328 01:25:24.402202       1 main.go:227] handling current node
	I0328 01:33:27.520947    6044 command_runner.go:130] ! I0328 01:25:24.402220       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:27.520947    6044 command_runner.go:130] ! I0328 01:25:24.402229       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:27.520947    6044 command_runner.go:130] ! I0328 01:25:24.402486       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:27.520947    6044 command_runner.go:130] ! I0328 01:25:24.402522       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:27.520947    6044 command_runner.go:130] ! I0328 01:25:34.417358       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:27.521480    6044 command_runner.go:130] ! I0328 01:25:34.417459       1 main.go:227] handling current node
	I0328 01:33:27.521480    6044 command_runner.go:130] ! I0328 01:25:34.417475       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:27.521480    6044 command_runner.go:130] ! I0328 01:25:34.417485       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:27.521527    6044 command_runner.go:130] ! I0328 01:25:34.417877       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:27.521527    6044 command_runner.go:130] ! I0328 01:25:34.418025       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:27.521527    6044 command_runner.go:130] ! I0328 01:25:44.434985       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:27.521527    6044 command_runner.go:130] ! I0328 01:25:44.435206       1 main.go:227] handling current node
	I0328 01:33:27.521527    6044 command_runner.go:130] ! I0328 01:25:44.435441       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:27.521527    6044 command_runner.go:130] ! I0328 01:25:44.435475       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:27.521527    6044 command_runner.go:130] ! I0328 01:25:44.435904       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:27.521527    6044 command_runner.go:130] ! I0328 01:25:44.436000       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:27.521637    6044 command_runner.go:130] ! I0328 01:25:54.449873       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:27.521637    6044 command_runner.go:130] ! I0328 01:25:54.449975       1 main.go:227] handling current node
	I0328 01:33:27.521637    6044 command_runner.go:130] ! I0328 01:25:54.449990       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:27.521637    6044 command_runner.go:130] ! I0328 01:25:54.449999       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:27.521701    6044 command_runner.go:130] ! I0328 01:25:54.450243       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:27.521701    6044 command_runner.go:130] ! I0328 01:25:54.450388       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:27.521701    6044 command_runner.go:130] ! I0328 01:26:04.463682       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:27.521701    6044 command_runner.go:130] ! I0328 01:26:04.463799       1 main.go:227] handling current node
	I0328 01:33:27.521795    6044 command_runner.go:130] ! I0328 01:26:04.463816       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:27.521795    6044 command_runner.go:130] ! I0328 01:26:04.463828       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:27.521861    6044 command_runner.go:130] ! I0328 01:26:04.463959       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:27.521861    6044 command_runner.go:130] ! I0328 01:26:04.463990       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:27.521886    6044 command_runner.go:130] ! I0328 01:26:14.470825       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:27.521886    6044 command_runner.go:130] ! I0328 01:26:14.471577       1 main.go:227] handling current node
	I0328 01:33:27.521915    6044 command_runner.go:130] ! I0328 01:26:14.471678       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:27.521915    6044 command_runner.go:130] ! I0328 01:26:14.471692       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:27.521915    6044 command_runner.go:130] ! I0328 01:26:14.472010       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:27.521915    6044 command_runner.go:130] ! I0328 01:26:14.472170       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:27.521915    6044 command_runner.go:130] ! I0328 01:26:24.485860       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:27.521915    6044 command_runner.go:130] ! I0328 01:26:24.485913       1 main.go:227] handling current node
	I0328 01:33:27.521915    6044 command_runner.go:130] ! I0328 01:26:24.485944       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:27.521915    6044 command_runner.go:130] ! I0328 01:26:24.485951       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:27.521915    6044 command_runner.go:130] ! I0328 01:26:24.486383       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:27.521915    6044 command_runner.go:130] ! I0328 01:26:24.486499       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:27.521915    6044 command_runner.go:130] ! I0328 01:26:34.502352       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:27.521915    6044 command_runner.go:130] ! I0328 01:26:34.502457       1 main.go:227] handling current node
	I0328 01:33:27.521915    6044 command_runner.go:130] ! I0328 01:26:34.502475       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:27.521915    6044 command_runner.go:130] ! I0328 01:26:34.502484       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:27.521915    6044 command_runner.go:130] ! I0328 01:26:34.502671       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:27.521915    6044 command_runner.go:130] ! I0328 01:26:34.502731       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:27.521915    6044 command_runner.go:130] ! I0328 01:26:44.515791       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:27.521915    6044 command_runner.go:130] ! I0328 01:26:44.516785       1 main.go:227] handling current node
	I0328 01:33:27.521915    6044 command_runner.go:130] ! I0328 01:26:44.517605       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:27.521915    6044 command_runner.go:130] ! I0328 01:26:44.518163       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:27.521915    6044 command_runner.go:130] ! I0328 01:26:44.518724       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:27.521915    6044 command_runner.go:130] ! I0328 01:26:44.519042       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:27.521915    6044 command_runner.go:130] ! I0328 01:26:54.536706       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:27.521915    6044 command_runner.go:130] ! I0328 01:26:54.536762       1 main.go:227] handling current node
	I0328 01:33:27.521915    6044 command_runner.go:130] ! I0328 01:26:54.536796       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:27.521915    6044 command_runner.go:130] ! I0328 01:26:54.537236       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:27.521915    6044 command_runner.go:130] ! I0328 01:26:54.537725       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:27.521915    6044 command_runner.go:130] ! I0328 01:26:54.537823       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:27.521915    6044 command_runner.go:130] ! I0328 01:27:04.553753       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:27.521915    6044 command_runner.go:130] ! I0328 01:27:04.553802       1 main.go:227] handling current node
	I0328 01:33:27.521915    6044 command_runner.go:130] ! I0328 01:27:04.553813       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:27.521915    6044 command_runner.go:130] ! I0328 01:27:04.553820       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:27.521915    6044 command_runner.go:130] ! I0328 01:27:04.554279       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:27.521915    6044 command_runner.go:130] ! I0328 01:27:04.554301       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:27.521915    6044 command_runner.go:130] ! I0328 01:27:14.572473       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:27.521915    6044 command_runner.go:130] ! I0328 01:27:14.572567       1 main.go:227] handling current node
	I0328 01:33:27.521915    6044 command_runner.go:130] ! I0328 01:27:14.572583       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:27.521915    6044 command_runner.go:130] ! I0328 01:27:14.572591       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:27.521915    6044 command_runner.go:130] ! I0328 01:27:14.572710       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:27.521915    6044 command_runner.go:130] ! I0328 01:27:14.572740       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:27.522476    6044 command_runner.go:130] ! I0328 01:27:24.579996       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:27.522476    6044 command_runner.go:130] ! I0328 01:27:24.580041       1 main.go:227] handling current node
	I0328 01:33:27.522476    6044 command_runner.go:130] ! I0328 01:27:24.580053       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:27.522538    6044 command_runner.go:130] ! I0328 01:27:24.580357       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:27.522538    6044 command_runner.go:130] ! I0328 01:27:34.590722       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:27.522538    6044 command_runner.go:130] ! I0328 01:27:34.590837       1 main.go:227] handling current node
	I0328 01:33:27.522538    6044 command_runner.go:130] ! I0328 01:27:34.590855       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:27.522538    6044 command_runner.go:130] ! I0328 01:27:34.590864       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:27.522627    6044 command_runner.go:130] ! I0328 01:27:34.591158       1 main.go:223] Handling node with IPs: map[172.28.224.172:{}]
	I0328 01:33:27.522627    6044 command_runner.go:130] ! I0328 01:27:34.591426       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.3.0/24] 
	I0328 01:33:27.522686    6044 command_runner.go:130] ! I0328 01:27:34.591599       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.3.0/24 Src: <nil> Gw: 172.28.224.172 Flags: [] Table: 0} 
	I0328 01:33:27.522686    6044 command_runner.go:130] ! I0328 01:27:44.598527       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:27.522686    6044 command_runner.go:130] ! I0328 01:27:44.598576       1 main.go:227] handling current node
	I0328 01:33:27.522686    6044 command_runner.go:130] ! I0328 01:27:44.598590       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:27.522756    6044 command_runner.go:130] ! I0328 01:27:44.598597       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:27.522756    6044 command_runner.go:130] ! I0328 01:27:44.599051       1 main.go:223] Handling node with IPs: map[172.28.224.172:{}]
	I0328 01:33:27.522797    6044 command_runner.go:130] ! I0328 01:27:44.599199       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.3.0/24] 
	I0328 01:33:27.522797    6044 command_runner.go:130] ! I0328 01:27:54.612380       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:27.522797    6044 command_runner.go:130] ! I0328 01:27:54.612492       1 main.go:227] handling current node
	I0328 01:33:27.522937    6044 command_runner.go:130] ! I0328 01:27:54.612511       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:27.523022    6044 command_runner.go:130] ! I0328 01:27:54.612521       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:27.523022    6044 command_runner.go:130] ! I0328 01:27:54.612644       1 main.go:223] Handling node with IPs: map[172.28.224.172:{}]
	I0328 01:33:27.523022    6044 command_runner.go:130] ! I0328 01:27:54.612675       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.3.0/24] 
	I0328 01:33:27.523070    6044 command_runner.go:130] ! I0328 01:28:04.619944       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:27.523070    6044 command_runner.go:130] ! I0328 01:28:04.619975       1 main.go:227] handling current node
	I0328 01:33:27.523115    6044 command_runner.go:130] ! I0328 01:28:04.619987       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:27.523115    6044 command_runner.go:130] ! I0328 01:28:04.619994       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:27.523115    6044 command_runner.go:130] ! I0328 01:28:04.620739       1 main.go:223] Handling node with IPs: map[172.28.224.172:{}]
	I0328 01:33:27.523163    6044 command_runner.go:130] ! I0328 01:28:04.620826       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.3.0/24] 
	I0328 01:33:27.523163    6044 command_runner.go:130] ! I0328 01:28:14.637978       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:27.523221    6044 command_runner.go:130] ! I0328 01:28:14.638455       1 main.go:227] handling current node
	I0328 01:33:27.523221    6044 command_runner.go:130] ! I0328 01:28:14.639024       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:27.523246    6044 command_runner.go:130] ! I0328 01:28:14.639507       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:27.523246    6044 command_runner.go:130] ! I0328 01:28:14.640025       1 main.go:223] Handling node with IPs: map[172.28.224.172:{}]
	I0328 01:33:27.523246    6044 command_runner.go:130] ! I0328 01:28:14.640512       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.3.0/24] 
	I0328 01:33:27.523292    6044 command_runner.go:130] ! I0328 01:28:24.648901       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:27.523326    6044 command_runner.go:130] ! I0328 01:28:24.649550       1 main.go:227] handling current node
	I0328 01:33:27.523342    6044 command_runner.go:130] ! I0328 01:28:24.649741       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:27.523342    6044 command_runner.go:130] ! I0328 01:28:24.650198       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:27.523342    6044 command_runner.go:130] ! I0328 01:28:24.650806       1 main.go:223] Handling node with IPs: map[172.28.224.172:{}]
	I0328 01:33:27.523342    6044 command_runner.go:130] ! I0328 01:28:24.651143       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.3.0/24] 
	I0328 01:33:27.523399    6044 command_runner.go:130] ! I0328 01:28:34.657839       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:27.523399    6044 command_runner.go:130] ! I0328 01:28:34.658038       1 main.go:227] handling current node
	I0328 01:33:27.523399    6044 command_runner.go:130] ! I0328 01:28:34.658054       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:27.523399    6044 command_runner.go:130] ! I0328 01:28:34.658080       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:27.523459    6044 command_runner.go:130] ! I0328 01:28:34.658271       1 main.go:223] Handling node with IPs: map[172.28.224.172:{}]
	I0328 01:33:27.523483    6044 command_runner.go:130] ! I0328 01:28:34.658831       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.3.0/24] 
	I0328 01:33:27.523483    6044 command_runner.go:130] ! I0328 01:28:44.666644       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:27.523483    6044 command_runner.go:130] ! I0328 01:28:44.666752       1 main.go:227] handling current node
	I0328 01:33:27.523529    6044 command_runner.go:130] ! I0328 01:28:44.666769       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:27.523551    6044 command_runner.go:130] ! I0328 01:28:44.666778       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:27.523551    6044 command_runner.go:130] ! I0328 01:28:44.667298       1 main.go:223] Handling node with IPs: map[172.28.224.172:{}]
	I0328 01:33:27.523551    6044 command_runner.go:130] ! I0328 01:28:44.667513       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.3.0/24] 
	I0328 01:33:27.523551    6044 command_runner.go:130] ! I0328 01:28:54.679890       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:27.523551    6044 command_runner.go:130] ! I0328 01:28:54.679999       1 main.go:227] handling current node
	I0328 01:33:27.523551    6044 command_runner.go:130] ! I0328 01:28:54.680015       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:27.523625    6044 command_runner.go:130] ! I0328 01:28:54.680023       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:27.523625    6044 command_runner.go:130] ! I0328 01:28:54.680512       1 main.go:223] Handling node with IPs: map[172.28.224.172:{}]
	I0328 01:33:27.523625    6044 command_runner.go:130] ! I0328 01:28:54.680547       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.3.0/24] 
	I0328 01:33:27.523625    6044 command_runner.go:130] ! I0328 01:29:04.687598       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:27.523625    6044 command_runner.go:130] ! I0328 01:29:04.687765       1 main.go:227] handling current node
	I0328 01:33:27.523625    6044 command_runner.go:130] ! I0328 01:29:04.687785       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:27.523625    6044 command_runner.go:130] ! I0328 01:29:04.687796       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:27.523625    6044 command_runner.go:130] ! I0328 01:29:04.687963       1 main.go:223] Handling node with IPs: map[172.28.224.172:{}]
	I0328 01:33:27.523625    6044 command_runner.go:130] ! I0328 01:29:04.687979       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.3.0/24] 
	I0328 01:33:27.523625    6044 command_runner.go:130] ! I0328 01:29:14.698762       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:27.523625    6044 command_runner.go:130] ! I0328 01:29:14.698810       1 main.go:227] handling current node
	I0328 01:33:27.523625    6044 command_runner.go:130] ! I0328 01:29:14.698825       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:27.523625    6044 command_runner.go:130] ! I0328 01:29:14.698832       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:27.523625    6044 command_runner.go:130] ! I0328 01:29:14.699169       1 main.go:223] Handling node with IPs: map[172.28.224.172:{}]
	I0328 01:33:27.523625    6044 command_runner.go:130] ! I0328 01:29:14.699203       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.3.0/24] 
	I0328 01:33:27.523625    6044 command_runner.go:130] ! I0328 01:29:24.717977       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:27.523625    6044 command_runner.go:130] ! I0328 01:29:24.718118       1 main.go:227] handling current node
	I0328 01:33:27.524219    6044 command_runner.go:130] ! I0328 01:29:24.718136       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:27.524242    6044 command_runner.go:130] ! I0328 01:29:24.718145       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:27.524242    6044 command_runner.go:130] ! I0328 01:29:24.718279       1 main.go:223] Handling node with IPs: map[172.28.224.172:{}]
	I0328 01:33:27.524242    6044 command_runner.go:130] ! I0328 01:29:24.718311       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.3.0/24] 
	I0328 01:33:27.524242    6044 command_runner.go:130] ! I0328 01:29:34.724517       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:27.524242    6044 command_runner.go:130] ! I0328 01:29:34.724618       1 main.go:227] handling current node
	I0328 01:33:27.524242    6044 command_runner.go:130] ! I0328 01:29:34.724634       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:27.524242    6044 command_runner.go:130] ! I0328 01:29:34.724643       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:27.524242    6044 command_runner.go:130] ! I0328 01:29:34.725226       1 main.go:223] Handling node with IPs: map[172.28.224.172:{}]
	I0328 01:33:27.524242    6044 command_runner.go:130] ! I0328 01:29:34.725416       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.3.0/24] 
	I0328 01:33:27.544537    6044 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:33:27.544537    6044 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0328 01:33:27.811318    6044 command_runner.go:130] > Name:               multinode-240000
	I0328 01:33:27.811389    6044 command_runner.go:130] > Roles:              control-plane
	I0328 01:33:27.811389    6044 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0328 01:33:27.811453    6044 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0328 01:33:27.811453    6044 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0328 01:33:27.811453    6044 command_runner.go:130] >                     kubernetes.io/hostname=multinode-240000
	I0328 01:33:27.811531    6044 command_runner.go:130] >                     kubernetes.io/os=linux
	I0328 01:33:27.811531    6044 command_runner.go:130] >                     minikube.k8s.io/commit=1a940f980f77c9bfc8ef93678db8c1f49c9dd79d
	I0328 01:33:27.811531    6044 command_runner.go:130] >                     minikube.k8s.io/name=multinode-240000
	I0328 01:33:27.811531    6044 command_runner.go:130] >                     minikube.k8s.io/primary=true
	I0328 01:33:27.811531    6044 command_runner.go:130] >                     minikube.k8s.io/updated_at=2024_03_28T01_07_32_0700
	I0328 01:33:27.811642    6044 command_runner.go:130] >                     minikube.k8s.io/version=v1.33.0-beta.0
	I0328 01:33:27.811686    6044 command_runner.go:130] >                     node-role.kubernetes.io/control-plane=
	I0328 01:33:27.811686    6044 command_runner.go:130] >                     node.kubernetes.io/exclude-from-external-load-balancers=
	I0328 01:33:27.811686    6044 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0328 01:33:27.811686    6044 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0328 01:33:27.811686    6044 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0328 01:33:27.811775    6044 command_runner.go:130] > CreationTimestamp:  Thu, 28 Mar 2024 01:07:27 +0000
	I0328 01:33:27.811775    6044 command_runner.go:130] > Taints:             <none>
	I0328 01:33:27.811775    6044 command_runner.go:130] > Unschedulable:      false
	I0328 01:33:27.811775    6044 command_runner.go:130] > Lease:
	I0328 01:33:27.811775    6044 command_runner.go:130] >   HolderIdentity:  multinode-240000
	I0328 01:33:27.811775    6044 command_runner.go:130] >   AcquireTime:     <unset>
	I0328 01:33:27.811775    6044 command_runner.go:130] >   RenewTime:       Thu, 28 Mar 2024 01:33:19 +0000
	I0328 01:33:27.811775    6044 command_runner.go:130] > Conditions:
	I0328 01:33:27.811856    6044 command_runner.go:130] >   Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	I0328 01:33:27.811856    6044 command_runner.go:130] >   ----             ------  -----------------                 ------------------                ------                       -------
	I0328 01:33:27.811856    6044 command_runner.go:130] >   MemoryPressure   False   Thu, 28 Mar 2024 01:32:53 +0000   Thu, 28 Mar 2024 01:07:24 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	I0328 01:33:27.811918    6044 command_runner.go:130] >   DiskPressure     False   Thu, 28 Mar 2024 01:32:53 +0000   Thu, 28 Mar 2024 01:07:24 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	I0328 01:33:27.811954    6044 command_runner.go:130] >   PIDPressure      False   Thu, 28 Mar 2024 01:32:53 +0000   Thu, 28 Mar 2024 01:07:24 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	I0328 01:33:27.811954    6044 command_runner.go:130] >   Ready            True    Thu, 28 Mar 2024 01:32:53 +0000   Thu, 28 Mar 2024 01:32:53 +0000   KubeletReady                 kubelet is posting ready status
	I0328 01:33:27.811954    6044 command_runner.go:130] > Addresses:
	I0328 01:33:27.811954    6044 command_runner.go:130] >   InternalIP:  172.28.229.19
	I0328 01:33:27.811954    6044 command_runner.go:130] >   Hostname:    multinode-240000
	I0328 01:33:27.812015    6044 command_runner.go:130] > Capacity:
	I0328 01:33:27.812015    6044 command_runner.go:130] >   cpu:                2
	I0328 01:33:27.812042    6044 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0328 01:33:27.812042    6044 command_runner.go:130] >   hugepages-2Mi:      0
	I0328 01:33:27.812042    6044 command_runner.go:130] >   memory:             2164264Ki
	I0328 01:33:27.812042    6044 command_runner.go:130] >   pods:               110
	I0328 01:33:27.812042    6044 command_runner.go:130] > Allocatable:
	I0328 01:33:27.812042    6044 command_runner.go:130] >   cpu:                2
	I0328 01:33:27.812107    6044 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0328 01:33:27.812107    6044 command_runner.go:130] >   hugepages-2Mi:      0
	I0328 01:33:27.812107    6044 command_runner.go:130] >   memory:             2164264Ki
	I0328 01:33:27.812133    6044 command_runner.go:130] >   pods:               110
	I0328 01:33:27.812133    6044 command_runner.go:130] > System Info:
	I0328 01:33:27.812133    6044 command_runner.go:130] >   Machine ID:                 fe98ff783f164d50926235b1a1a0c9a9
	I0328 01:33:27.812133    6044 command_runner.go:130] >   System UUID:                074b49af-5c50-b749-b0a9-2a3d75bf97a0
	I0328 01:33:27.812133    6044 command_runner.go:130] >   Boot ID:                    88b5f16c-258a-4fb6-a998-e0ffa63edff9
	I0328 01:33:27.812133    6044 command_runner.go:130] >   Kernel Version:             5.10.207
	I0328 01:33:27.812133    6044 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0328 01:33:27.812259    6044 command_runner.go:130] >   Operating System:           linux
	I0328 01:33:27.812282    6044 command_runner.go:130] >   Architecture:               amd64
	I0328 01:33:27.812282    6044 command_runner.go:130] >   Container Runtime Version:  docker://26.0.0
	I0328 01:33:27.812282    6044 command_runner.go:130] >   Kubelet Version:            v1.29.3
	I0328 01:33:27.812282    6044 command_runner.go:130] >   Kube-Proxy Version:         v1.29.3
	I0328 01:33:27.812282    6044 command_runner.go:130] > PodCIDR:                      10.244.0.0/24
	I0328 01:33:27.812282    6044 command_runner.go:130] > PodCIDRs:                     10.244.0.0/24
	I0328 01:33:27.812282    6044 command_runner.go:130] > Non-terminated Pods:          (9 in total)
	I0328 01:33:27.812375    6044 command_runner.go:130] >   Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0328 01:33:27.812375    6044 command_runner.go:130] >   ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	I0328 01:33:27.812440    6044 command_runner.go:130] >   default                     busybox-7fdf7869d9-ct428                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	I0328 01:33:27.812440    6044 command_runner.go:130] >   kube-system                 coredns-76f75df574-776ph                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     25m
	I0328 01:33:27.812464    6044 command_runner.go:130] >   kube-system                 etcd-multinode-240000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         68s
	I0328 01:33:27.812464    6044 command_runner.go:130] >   kube-system                 kindnet-rwghf                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      25m
	I0328 01:33:27.812464    6044 command_runner.go:130] >   kube-system                 kube-apiserver-multinode-240000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         68s
	I0328 01:33:27.812464    6044 command_runner.go:130] >   kube-system                 kube-controller-manager-multinode-240000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         25m
	I0328 01:33:27.812575    6044 command_runner.go:130] >   kube-system                 kube-proxy-47rqg                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         25m
	I0328 01:33:27.812598    6044 command_runner.go:130] >   kube-system                 kube-scheduler-multinode-240000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         25m
	I0328 01:33:27.812598    6044 command_runner.go:130] >   kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         25m
	I0328 01:33:27.812598    6044 command_runner.go:130] > Allocated resources:
	I0328 01:33:27.812598    6044 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0328 01:33:27.812598    6044 command_runner.go:130] >   Resource           Requests     Limits
	I0328 01:33:27.812598    6044 command_runner.go:130] >   --------           --------     ------
	I0328 01:33:27.812703    6044 command_runner.go:130] >   cpu                850m (42%)   100m (5%)
	I0328 01:33:27.812703    6044 command_runner.go:130] >   memory             220Mi (10%)  220Mi (10%)
	I0328 01:33:27.812703    6044 command_runner.go:130] >   ephemeral-storage  0 (0%)       0 (0%)
	I0328 01:33:27.812703    6044 command_runner.go:130] >   hugepages-2Mi      0 (0%)       0 (0%)
	I0328 01:33:27.812758    6044 command_runner.go:130] > Events:
	I0328 01:33:27.812758    6044 command_runner.go:130] >   Type    Reason                   Age                From             Message
	I0328 01:33:27.812783    6044 command_runner.go:130] >   ----    ------                   ----               ----             -------
	I0328 01:33:27.812783    6044 command_runner.go:130] >   Normal  Starting                 25m                kube-proxy       
	I0328 01:33:27.812813    6044 command_runner.go:130] >   Normal  Starting                 65s                kube-proxy       
	I0328 01:33:27.812813    6044 command_runner.go:130] >   Normal  Starting                 26m                kubelet          Starting kubelet.
	I0328 01:33:27.812813    6044 command_runner.go:130] >   Normal  NodeHasSufficientMemory  26m (x8 over 26m)  kubelet          Node multinode-240000 status is now: NodeHasSufficientMemory
	I0328 01:33:27.812813    6044 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    26m (x8 over 26m)  kubelet          Node multinode-240000 status is now: NodeHasNoDiskPressure
	I0328 01:33:27.812813    6044 command_runner.go:130] >   Normal  NodeHasSufficientPID     26m (x7 over 26m)  kubelet          Node multinode-240000 status is now: NodeHasSufficientPID
	I0328 01:33:27.812813    6044 command_runner.go:130] >   Normal  NodeAllocatableEnforced  26m                kubelet          Updated Node Allocatable limit across pods
	I0328 01:33:27.812813    6044 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    25m                kubelet          Node multinode-240000 status is now: NodeHasNoDiskPressure
	I0328 01:33:27.812813    6044 command_runner.go:130] >   Normal  NodeAllocatableEnforced  25m                kubelet          Updated Node Allocatable limit across pods
	I0328 01:33:27.812813    6044 command_runner.go:130] >   Normal  NodeHasSufficientMemory  25m                kubelet          Node multinode-240000 status is now: NodeHasSufficientMemory
	I0328 01:33:27.812813    6044 command_runner.go:130] >   Normal  NodeHasSufficientPID     25m                kubelet          Node multinode-240000 status is now: NodeHasSufficientPID
	I0328 01:33:27.812813    6044 command_runner.go:130] >   Normal  Starting                 25m                kubelet          Starting kubelet.
	I0328 01:33:27.812813    6044 command_runner.go:130] >   Normal  RegisteredNode           25m                node-controller  Node multinode-240000 event: Registered Node multinode-240000 in Controller
	I0328 01:33:27.812813    6044 command_runner.go:130] >   Normal  NodeReady                25m                kubelet          Node multinode-240000 status is now: NodeReady
	I0328 01:33:27.812813    6044 command_runner.go:130] >   Normal  Starting                 74s                kubelet          Starting kubelet.
	I0328 01:33:27.812813    6044 command_runner.go:130] >   Normal  NodeHasSufficientPID     74s (x7 over 74s)  kubelet          Node multinode-240000 status is now: NodeHasSufficientPID
	I0328 01:33:27.812813    6044 command_runner.go:130] >   Normal  NodeAllocatableEnforced  74s                kubelet          Updated Node Allocatable limit across pods
	I0328 01:33:27.812813    6044 command_runner.go:130] >   Normal  NodeHasSufficientMemory  73s (x8 over 74s)  kubelet          Node multinode-240000 status is now: NodeHasSufficientMemory
	I0328 01:33:27.812813    6044 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    73s (x8 over 74s)  kubelet          Node multinode-240000 status is now: NodeHasNoDiskPressure
	I0328 01:33:27.812813    6044 command_runner.go:130] >   Normal  RegisteredNode           55s                node-controller  Node multinode-240000 event: Registered Node multinode-240000 in Controller
	I0328 01:33:27.812813    6044 command_runner.go:130] > Name:               multinode-240000-m02
	I0328 01:33:27.812813    6044 command_runner.go:130] > Roles:              <none>
	I0328 01:33:27.812813    6044 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0328 01:33:27.812813    6044 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0328 01:33:27.812813    6044 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0328 01:33:27.812813    6044 command_runner.go:130] >                     kubernetes.io/hostname=multinode-240000-m02
	I0328 01:33:27.812813    6044 command_runner.go:130] >                     kubernetes.io/os=linux
	I0328 01:33:27.812813    6044 command_runner.go:130] >                     minikube.k8s.io/commit=1a940f980f77c9bfc8ef93678db8c1f49c9dd79d
	I0328 01:33:27.812813    6044 command_runner.go:130] >                     minikube.k8s.io/name=multinode-240000
	I0328 01:33:27.812813    6044 command_runner.go:130] >                     minikube.k8s.io/primary=false
	I0328 01:33:27.812813    6044 command_runner.go:130] >                     minikube.k8s.io/updated_at=2024_03_28T01_10_55_0700
	I0328 01:33:27.812813    6044 command_runner.go:130] >                     minikube.k8s.io/version=v1.33.0-beta.0
	I0328 01:33:27.812813    6044 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0328 01:33:27.812813    6044 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0328 01:33:27.812813    6044 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0328 01:33:27.812813    6044 command_runner.go:130] > CreationTimestamp:  Thu, 28 Mar 2024 01:10:54 +0000
	I0328 01:33:27.812813    6044 command_runner.go:130] > Taints:             node.kubernetes.io/unreachable:NoExecute
	I0328 01:33:27.812813    6044 command_runner.go:130] >                     node.kubernetes.io/unreachable:NoSchedule
	I0328 01:33:27.812813    6044 command_runner.go:130] > Unschedulable:      false
	I0328 01:33:27.812813    6044 command_runner.go:130] > Lease:
	I0328 01:33:27.812813    6044 command_runner.go:130] >   HolderIdentity:  multinode-240000-m02
	I0328 01:33:27.812813    6044 command_runner.go:130] >   AcquireTime:     <unset>
	I0328 01:33:27.812813    6044 command_runner.go:130] >   RenewTime:       Thu, 28 Mar 2024 01:28:58 +0000
	I0328 01:33:27.812813    6044 command_runner.go:130] > Conditions:
	I0328 01:33:27.812813    6044 command_runner.go:130] >   Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	I0328 01:33:27.812813    6044 command_runner.go:130] >   ----             ------    -----------------                 ------------------                ------              -------
	I0328 01:33:27.812813    6044 command_runner.go:130] >   MemoryPressure   Unknown   Thu, 28 Mar 2024 01:27:18 +0000   Thu, 28 Mar 2024 01:33:12 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0328 01:33:27.813425    6044 command_runner.go:130] >   DiskPressure     Unknown   Thu, 28 Mar 2024 01:27:18 +0000   Thu, 28 Mar 2024 01:33:12 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0328 01:33:27.813425    6044 command_runner.go:130] >   PIDPressure      Unknown   Thu, 28 Mar 2024 01:27:18 +0000   Thu, 28 Mar 2024 01:33:12 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0328 01:33:27.813425    6044 command_runner.go:130] >   Ready            Unknown   Thu, 28 Mar 2024 01:27:18 +0000   Thu, 28 Mar 2024 01:33:12 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0328 01:33:27.813425    6044 command_runner.go:130] > Addresses:
	I0328 01:33:27.813425    6044 command_runner.go:130] >   InternalIP:  172.28.230.250
	I0328 01:33:27.813425    6044 command_runner.go:130] >   Hostname:    multinode-240000-m02
	I0328 01:33:27.813425    6044 command_runner.go:130] > Capacity:
	I0328 01:33:27.813425    6044 command_runner.go:130] >   cpu:                2
	I0328 01:33:27.813425    6044 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0328 01:33:27.813425    6044 command_runner.go:130] >   hugepages-2Mi:      0
	I0328 01:33:27.813425    6044 command_runner.go:130] >   memory:             2164264Ki
	I0328 01:33:27.813425    6044 command_runner.go:130] >   pods:               110
	I0328 01:33:27.813425    6044 command_runner.go:130] > Allocatable:
	I0328 01:33:27.813581    6044 command_runner.go:130] >   cpu:                2
	I0328 01:33:27.813581    6044 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0328 01:33:27.813581    6044 command_runner.go:130] >   hugepages-2Mi:      0
	I0328 01:33:27.813581    6044 command_runner.go:130] >   memory:             2164264Ki
	I0328 01:33:27.813581    6044 command_runner.go:130] >   pods:               110
	I0328 01:33:27.813581    6044 command_runner.go:130] > System Info:
	I0328 01:33:27.813581    6044 command_runner.go:130] >   Machine ID:                 2bcbb6f523d04ea69ba7f23d0cdfec75
	I0328 01:33:27.813581    6044 command_runner.go:130] >   System UUID:                d499bd2a-38ff-6a40-b0a5-5534aeedd854
	I0328 01:33:27.813581    6044 command_runner.go:130] >   Boot ID:                    cfc1ec0e-7646-40c9-8245-9d09d77d6b1d
	I0328 01:33:27.813689    6044 command_runner.go:130] >   Kernel Version:             5.10.207
	I0328 01:33:27.813689    6044 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0328 01:33:27.813713    6044 command_runner.go:130] >   Operating System:           linux
	I0328 01:33:27.813713    6044 command_runner.go:130] >   Architecture:               amd64
	I0328 01:33:27.813713    6044 command_runner.go:130] >   Container Runtime Version:  docker://26.0.0
	I0328 01:33:27.813713    6044 command_runner.go:130] >   Kubelet Version:            v1.29.3
	I0328 01:33:27.813713    6044 command_runner.go:130] >   Kube-Proxy Version:         v1.29.3
	I0328 01:33:27.813713    6044 command_runner.go:130] > PodCIDR:                      10.244.1.0/24
	I0328 01:33:27.813713    6044 command_runner.go:130] > PodCIDRs:                     10.244.1.0/24
	I0328 01:33:27.813786    6044 command_runner.go:130] > Non-terminated Pods:          (3 in total)
	I0328 01:33:27.813786    6044 command_runner.go:130] >   Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0328 01:33:27.813786    6044 command_runner.go:130] >   ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	I0328 01:33:27.813786    6044 command_runner.go:130] >   default                     busybox-7fdf7869d9-zgwm4    0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	I0328 01:33:27.813786    6044 command_runner.go:130] >   kube-system                 kindnet-hsnfl               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      22m
	I0328 01:33:27.813892    6044 command_runner.go:130] >   kube-system                 kube-proxy-t88gz            0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	I0328 01:33:27.813892    6044 command_runner.go:130] > Allocated resources:
	I0328 01:33:27.813892    6044 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0328 01:33:27.813892    6044 command_runner.go:130] >   Resource           Requests   Limits
	I0328 01:33:27.813892    6044 command_runner.go:130] >   --------           --------   ------
	I0328 01:33:27.813892    6044 command_runner.go:130] >   cpu                100m (5%)  100m (5%)
	I0328 01:33:27.813965    6044 command_runner.go:130] >   memory             50Mi (2%)  50Mi (2%)
	I0328 01:33:27.813965    6044 command_runner.go:130] >   ephemeral-storage  0 (0%)     0 (0%)
	I0328 01:33:27.813965    6044 command_runner.go:130] >   hugepages-2Mi      0 (0%)     0 (0%)
	I0328 01:33:27.813965    6044 command_runner.go:130] > Events:
	I0328 01:33:27.813965    6044 command_runner.go:130] >   Type    Reason                   Age                From             Message
	I0328 01:33:27.814028    6044 command_runner.go:130] >   ----    ------                   ----               ----             -------
	I0328 01:33:27.814054    6044 command_runner.go:130] >   Normal  Starting                 22m                kube-proxy       
	I0328 01:33:27.814054    6044 command_runner.go:130] >   Normal  Starting                 22m                kubelet          Starting kubelet.
	I0328 01:33:27.814085    6044 command_runner.go:130] >   Normal  NodeHasSufficientMemory  22m (x2 over 22m)  kubelet          Node multinode-240000-m02 status is now: NodeHasSufficientMemory
	I0328 01:33:27.814085    6044 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    22m (x2 over 22m)  kubelet          Node multinode-240000-m02 status is now: NodeHasNoDiskPressure
	I0328 01:33:27.814124    6044 command_runner.go:130] >   Normal  NodeHasSufficientPID     22m (x2 over 22m)  kubelet          Node multinode-240000-m02 status is now: NodeHasSufficientPID
	I0328 01:33:27.814124    6044 command_runner.go:130] >   Normal  NodeAllocatableEnforced  22m                kubelet          Updated Node Allocatable limit across pods
	I0328 01:33:27.814124    6044 command_runner.go:130] >   Normal  RegisteredNode           22m                node-controller  Node multinode-240000-m02 event: Registered Node multinode-240000-m02 in Controller
	I0328 01:33:27.814124    6044 command_runner.go:130] >   Normal  NodeReady                22m                kubelet          Node multinode-240000-m02 status is now: NodeReady
	I0328 01:33:27.814202    6044 command_runner.go:130] >   Normal  RegisteredNode           55s                node-controller  Node multinode-240000-m02 event: Registered Node multinode-240000-m02 in Controller
	I0328 01:33:27.814256    6044 command_runner.go:130] >   Normal  NodeNotReady             15s                node-controller  Node multinode-240000-m02 status is now: NodeNotReady
	I0328 01:33:27.814256    6044 command_runner.go:130] > Name:               multinode-240000-m03
	I0328 01:33:27.814256    6044 command_runner.go:130] > Roles:              <none>
	I0328 01:33:27.814256    6044 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0328 01:33:27.814256    6044 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0328 01:33:27.814256    6044 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0328 01:33:27.814256    6044 command_runner.go:130] >                     kubernetes.io/hostname=multinode-240000-m03
	I0328 01:33:27.814256    6044 command_runner.go:130] >                     kubernetes.io/os=linux
	I0328 01:33:27.814256    6044 command_runner.go:130] >                     minikube.k8s.io/commit=1a940f980f77c9bfc8ef93678db8c1f49c9dd79d
	I0328 01:33:27.814256    6044 command_runner.go:130] >                     minikube.k8s.io/name=multinode-240000
	I0328 01:33:27.814256    6044 command_runner.go:130] >                     minikube.k8s.io/primary=false
	I0328 01:33:27.814256    6044 command_runner.go:130] >                     minikube.k8s.io/updated_at=2024_03_28T01_27_31_0700
	I0328 01:33:27.814256    6044 command_runner.go:130] >                     minikube.k8s.io/version=v1.33.0-beta.0
	I0328 01:33:27.814256    6044 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0328 01:33:27.814256    6044 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0328 01:33:27.814256    6044 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0328 01:33:27.814256    6044 command_runner.go:130] > CreationTimestamp:  Thu, 28 Mar 2024 01:27:30 +0000
	I0328 01:33:27.814256    6044 command_runner.go:130] > Taints:             node.kubernetes.io/unreachable:NoExecute
	I0328 01:33:27.814256    6044 command_runner.go:130] >                     node.kubernetes.io/unreachable:NoSchedule
	I0328 01:33:27.814256    6044 command_runner.go:130] > Unschedulable:      false
	I0328 01:33:27.814256    6044 command_runner.go:130] > Lease:
	I0328 01:33:27.814256    6044 command_runner.go:130] >   HolderIdentity:  multinode-240000-m03
	I0328 01:33:27.814256    6044 command_runner.go:130] >   AcquireTime:     <unset>
	I0328 01:33:27.814256    6044 command_runner.go:130] >   RenewTime:       Thu, 28 Mar 2024 01:28:31 +0000
	I0328 01:33:27.814256    6044 command_runner.go:130] > Conditions:
	I0328 01:33:27.814256    6044 command_runner.go:130] >   Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	I0328 01:33:27.814256    6044 command_runner.go:130] >   ----             ------    -----------------                 ------------------                ------              -------
	I0328 01:33:27.814256    6044 command_runner.go:130] >   MemoryPressure   Unknown   Thu, 28 Mar 2024 01:27:36 +0000   Thu, 28 Mar 2024 01:29:14 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0328 01:33:27.814256    6044 command_runner.go:130] >   DiskPressure     Unknown   Thu, 28 Mar 2024 01:27:36 +0000   Thu, 28 Mar 2024 01:29:14 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0328 01:33:27.814256    6044 command_runner.go:130] >   PIDPressure      Unknown   Thu, 28 Mar 2024 01:27:36 +0000   Thu, 28 Mar 2024 01:29:14 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0328 01:33:27.814256    6044 command_runner.go:130] >   Ready            Unknown   Thu, 28 Mar 2024 01:27:36 +0000   Thu, 28 Mar 2024 01:29:14 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0328 01:33:27.814256    6044 command_runner.go:130] > Addresses:
	I0328 01:33:27.814256    6044 command_runner.go:130] >   InternalIP:  172.28.224.172
	I0328 01:33:27.814256    6044 command_runner.go:130] >   Hostname:    multinode-240000-m03
	I0328 01:33:27.814256    6044 command_runner.go:130] > Capacity:
	I0328 01:33:27.814256    6044 command_runner.go:130] >   cpu:                2
	I0328 01:33:27.814256    6044 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0328 01:33:27.814256    6044 command_runner.go:130] >   hugepages-2Mi:      0
	I0328 01:33:27.814256    6044 command_runner.go:130] >   memory:             2164264Ki
	I0328 01:33:27.814256    6044 command_runner.go:130] >   pods:               110
	I0328 01:33:27.814256    6044 command_runner.go:130] > Allocatable:
	I0328 01:33:27.814256    6044 command_runner.go:130] >   cpu:                2
	I0328 01:33:27.814256    6044 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0328 01:33:27.814256    6044 command_runner.go:130] >   hugepages-2Mi:      0
	I0328 01:33:27.814256    6044 command_runner.go:130] >   memory:             2164264Ki
	I0328 01:33:27.814256    6044 command_runner.go:130] >   pods:               110
	I0328 01:33:27.814256    6044 command_runner.go:130] > System Info:
	I0328 01:33:27.814786    6044 command_runner.go:130] >   Machine ID:                 53e5a22090614654950f5f4d91307651
	I0328 01:33:27.814786    6044 command_runner.go:130] >   System UUID:                1b1cc332-0092-fa4b-8d09-1c599aae83ad
	I0328 01:33:27.814786    6044 command_runner.go:130] >   Boot ID:                    7cabd891-d8ad-4af2-8060-94ae87174528
	I0328 01:33:27.814786    6044 command_runner.go:130] >   Kernel Version:             5.10.207
	I0328 01:33:27.814786    6044 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0328 01:33:27.814786    6044 command_runner.go:130] >   Operating System:           linux
	I0328 01:33:27.814786    6044 command_runner.go:130] >   Architecture:               amd64
	I0328 01:33:27.814786    6044 command_runner.go:130] >   Container Runtime Version:  docker://26.0.0
	I0328 01:33:27.814786    6044 command_runner.go:130] >   Kubelet Version:            v1.29.3
	I0328 01:33:27.814786    6044 command_runner.go:130] >   Kube-Proxy Version:         v1.29.3
	I0328 01:33:27.814786    6044 command_runner.go:130] > PodCIDR:                      10.244.3.0/24
	I0328 01:33:27.815051    6044 command_runner.go:130] > PodCIDRs:                     10.244.3.0/24
	I0328 01:33:27.815051    6044 command_runner.go:130] > Non-terminated Pods:          (2 in total)
	I0328 01:33:27.815051    6044 command_runner.go:130] >   Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0328 01:33:27.815051    6044 command_runner.go:130] >   ---------                   ----                ------------  ----------  ---------------  -------------  ---
	I0328 01:33:27.815051    6044 command_runner.go:130] >   kube-system                 kindnet-jvgx2       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      17m
	I0328 01:33:27.815156    6044 command_runner.go:130] >   kube-system                 kube-proxy-55rch    0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	I0328 01:33:27.815185    6044 command_runner.go:130] > Allocated resources:
	I0328 01:33:27.815185    6044 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0328 01:33:27.815185    6044 command_runner.go:130] >   Resource           Requests   Limits
	I0328 01:33:27.815185    6044 command_runner.go:130] >   --------           --------   ------
	I0328 01:33:27.815185    6044 command_runner.go:130] >   cpu                100m (5%)  100m (5%)
	I0328 01:33:27.815185    6044 command_runner.go:130] >   memory             50Mi (2%)  50Mi (2%)
	I0328 01:33:27.815185    6044 command_runner.go:130] >   ephemeral-storage  0 (0%)     0 (0%)
	I0328 01:33:27.815185    6044 command_runner.go:130] >   hugepages-2Mi      0 (0%)     0 (0%)
	I0328 01:33:27.815185    6044 command_runner.go:130] > Events:
	I0328 01:33:27.815185    6044 command_runner.go:130] >   Type    Reason                   Age                    From             Message
	I0328 01:33:27.815185    6044 command_runner.go:130] >   ----    ------                   ----                   ----             -------
	I0328 01:33:27.815185    6044 command_runner.go:130] >   Normal  Starting                 17m                    kube-proxy       
	I0328 01:33:27.815185    6044 command_runner.go:130] >   Normal  Starting                 5m54s                  kube-proxy       
	I0328 01:33:27.815185    6044 command_runner.go:130] >   Normal  NodeHasSufficientMemory  17m (x2 over 17m)      kubelet          Node multinode-240000-m03 status is now: NodeHasSufficientMemory
	I0328 01:33:27.815185    6044 command_runner.go:130] >   Normal  NodeAllocatableEnforced  17m                    kubelet          Updated Node Allocatable limit across pods
	I0328 01:33:27.815185    6044 command_runner.go:130] >   Normal  Starting                 17m                    kubelet          Starting kubelet.
	I0328 01:33:27.815185    6044 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    17m (x2 over 17m)      kubelet          Node multinode-240000-m03 status is now: NodeHasNoDiskPressure
	I0328 01:33:27.815185    6044 command_runner.go:130] >   Normal  NodeHasSufficientPID     17m (x2 over 17m)      kubelet          Node multinode-240000-m03 status is now: NodeHasSufficientPID
	I0328 01:33:27.815185    6044 command_runner.go:130] >   Normal  NodeReady                17m                    kubelet          Node multinode-240000-m03 status is now: NodeReady
	I0328 01:33:27.815185    6044 command_runner.go:130] >   Normal  Starting                 5m57s                  kubelet          Starting kubelet.
	I0328 01:33:27.815185    6044 command_runner.go:130] >   Normal  NodeHasSufficientMemory  5m57s (x2 over 5m57s)  kubelet          Node multinode-240000-m03 status is now: NodeHasSufficientMemory
	I0328 01:33:27.815185    6044 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    5m57s (x2 over 5m57s)  kubelet          Node multinode-240000-m03 status is now: NodeHasNoDiskPressure
	I0328 01:33:27.815185    6044 command_runner.go:130] >   Normal  NodeHasSufficientPID     5m57s (x2 over 5m57s)  kubelet          Node multinode-240000-m03 status is now: NodeHasSufficientPID
	I0328 01:33:27.815185    6044 command_runner.go:130] >   Normal  NodeAllocatableEnforced  5m57s                  kubelet          Updated Node Allocatable limit across pods
	I0328 01:33:27.815185    6044 command_runner.go:130] >   Normal  RegisteredNode           5m53s                  node-controller  Node multinode-240000-m03 event: Registered Node multinode-240000-m03 in Controller
	I0328 01:33:27.815185    6044 command_runner.go:130] >   Normal  NodeReady                5m51s                  kubelet          Node multinode-240000-m03 status is now: NodeReady
	I0328 01:33:27.815185    6044 command_runner.go:130] >   Normal  NodeNotReady             4m13s                  node-controller  Node multinode-240000-m03 status is now: NodeNotReady
	I0328 01:33:27.815185    6044 command_runner.go:130] >   Normal  RegisteredNode           55s                    node-controller  Node multinode-240000-m03 event: Registered Node multinode-240000-m03 in Controller
	I0328 01:33:27.826410    6044 logs.go:123] Gathering logs for kube-apiserver [6539c85e1b61] ...
	I0328 01:33:27.826410    6044 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6539c85e1b61"
	I0328 01:33:27.860425    6044 command_runner.go:130] ! I0328 01:32:16.440903       1 options.go:222] external host was not specified, using 172.28.229.19
	I0328 01:33:27.860801    6044 command_runner.go:130] ! I0328 01:32:16.443001       1 server.go:148] Version: v1.29.3
	I0328 01:33:27.861466    6044 command_runner.go:130] ! I0328 01:32:16.443211       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0328 01:33:27.861466    6044 command_runner.go:130] ! I0328 01:32:17.234065       1 shared_informer.go:311] Waiting for caches to sync for node_authorizer
	I0328 01:33:27.861466    6044 command_runner.go:130] ! I0328 01:32:17.251028       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0328 01:33:27.861859    6044 command_runner.go:130] ! I0328 01:32:17.252647       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0328 01:33:27.861859    6044 command_runner.go:130] ! I0328 01:32:17.253295       1 instance.go:297] Using reconciler: lease
	I0328 01:33:27.861932    6044 command_runner.go:130] ! I0328 01:32:17.488371       1 handler.go:275] Adding GroupVersion apiextensions.k8s.io v1 to ResourceManager
	I0328 01:33:27.861932    6044 command_runner.go:130] ! W0328 01:32:17.492937       1 genericapiserver.go:742] Skipping API apiextensions.k8s.io/v1beta1 because it has no resources.
	I0328 01:33:27.861932    6044 command_runner.go:130] ! I0328 01:32:17.992938       1 handler.go:275] Adding GroupVersion  v1 to ResourceManager
	I0328 01:33:27.861932    6044 command_runner.go:130] ! I0328 01:32:17.993291       1 instance.go:693] API group "internal.apiserver.k8s.io" is not enabled, skipping.
	I0328 01:33:27.862022    6044 command_runner.go:130] ! I0328 01:32:18.498808       1 instance.go:693] API group "resource.k8s.io" is not enabled, skipping.
	I0328 01:33:27.862022    6044 command_runner.go:130] ! I0328 01:32:18.513162       1 handler.go:275] Adding GroupVersion authentication.k8s.io v1 to ResourceManager
	I0328 01:33:27.862050    6044 command_runner.go:130] ! W0328 01:32:18.513265       1 genericapiserver.go:742] Skipping API authentication.k8s.io/v1beta1 because it has no resources.
	I0328 01:33:27.862050    6044 command_runner.go:130] ! W0328 01:32:18.513276       1 genericapiserver.go:742] Skipping API authentication.k8s.io/v1alpha1 because it has no resources.
	I0328 01:33:27.862050    6044 command_runner.go:130] ! I0328 01:32:18.513869       1 handler.go:275] Adding GroupVersion authorization.k8s.io v1 to ResourceManager
	I0328 01:33:27.862127    6044 command_runner.go:130] ! W0328 01:32:18.513921       1 genericapiserver.go:742] Skipping API authorization.k8s.io/v1beta1 because it has no resources.
	I0328 01:33:27.862127    6044 command_runner.go:130] ! I0328 01:32:18.515227       1 handler.go:275] Adding GroupVersion autoscaling v2 to ResourceManager
	I0328 01:33:27.862127    6044 command_runner.go:130] ! I0328 01:32:18.516586       1 handler.go:275] Adding GroupVersion autoscaling v1 to ResourceManager
	I0328 01:33:27.862188    6044 command_runner.go:130] ! W0328 01:32:18.516885       1 genericapiserver.go:742] Skipping API autoscaling/v2beta1 because it has no resources.
	I0328 01:33:27.862188    6044 command_runner.go:130] ! W0328 01:32:18.516898       1 genericapiserver.go:742] Skipping API autoscaling/v2beta2 because it has no resources.
	I0328 01:33:27.862188    6044 command_runner.go:130] ! I0328 01:32:18.519356       1 handler.go:275] Adding GroupVersion batch v1 to ResourceManager
	I0328 01:33:27.862188    6044 command_runner.go:130] ! W0328 01:32:18.519460       1 genericapiserver.go:742] Skipping API batch/v1beta1 because it has no resources.
	I0328 01:33:27.862248    6044 command_runner.go:130] ! I0328 01:32:18.520668       1 handler.go:275] Adding GroupVersion certificates.k8s.io v1 to ResourceManager
	I0328 01:33:27.862248    6044 command_runner.go:130] ! W0328 01:32:18.520820       1 genericapiserver.go:742] Skipping API certificates.k8s.io/v1beta1 because it has no resources.
	I0328 01:33:27.862248    6044 command_runner.go:130] ! W0328 01:32:18.520830       1 genericapiserver.go:742] Skipping API certificates.k8s.io/v1alpha1 because it has no resources.
	I0328 01:33:27.862248    6044 command_runner.go:130] ! I0328 01:32:18.521802       1 handler.go:275] Adding GroupVersion coordination.k8s.io v1 to ResourceManager
	I0328 01:33:27.862314    6044 command_runner.go:130] ! W0328 01:32:18.521903       1 genericapiserver.go:742] Skipping API coordination.k8s.io/v1beta1 because it has no resources.
	I0328 01:33:27.862314    6044 command_runner.go:130] ! W0328 01:32:18.521953       1 genericapiserver.go:742] Skipping API discovery.k8s.io/v1beta1 because it has no resources.
	I0328 01:33:27.862314    6044 command_runner.go:130] ! I0328 01:32:18.523269       1 handler.go:275] Adding GroupVersion discovery.k8s.io v1 to ResourceManager
	I0328 01:33:27.862314    6044 command_runner.go:130] ! I0328 01:32:18.525859       1 handler.go:275] Adding GroupVersion networking.k8s.io v1 to ResourceManager
	I0328 01:33:27.862381    6044 command_runner.go:130] ! W0328 01:32:18.525960       1 genericapiserver.go:742] Skipping API networking.k8s.io/v1beta1 because it has no resources.
	I0328 01:33:27.862441    6044 command_runner.go:130] ! W0328 01:32:18.525970       1 genericapiserver.go:742] Skipping API networking.k8s.io/v1alpha1 because it has no resources.
	I0328 01:33:27.862456    6044 command_runner.go:130] ! I0328 01:32:18.526646       1 handler.go:275] Adding GroupVersion node.k8s.io v1 to ResourceManager
	I0328 01:33:27.862456    6044 command_runner.go:130] ! W0328 01:32:18.526842       1 genericapiserver.go:742] Skipping API node.k8s.io/v1beta1 because it has no resources.
	I0328 01:33:27.862519    6044 command_runner.go:130] ! W0328 01:32:18.526857       1 genericapiserver.go:742] Skipping API node.k8s.io/v1alpha1 because it has no resources.
	I0328 01:33:27.862519    6044 command_runner.go:130] ! I0328 01:32:18.527970       1 handler.go:275] Adding GroupVersion policy v1 to ResourceManager
	I0328 01:33:27.862519    6044 command_runner.go:130] ! W0328 01:32:18.528080       1 genericapiserver.go:742] Skipping API policy/v1beta1 because it has no resources.
	I0328 01:33:27.862519    6044 command_runner.go:130] ! I0328 01:32:18.530546       1 handler.go:275] Adding GroupVersion rbac.authorization.k8s.io v1 to ResourceManager
	I0328 01:33:27.862585    6044 command_runner.go:130] ! W0328 01:32:18.530652       1 genericapiserver.go:742] Skipping API rbac.authorization.k8s.io/v1beta1 because it has no resources.
	I0328 01:33:27.862610    6044 command_runner.go:130] ! W0328 01:32:18.530663       1 genericapiserver.go:742] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
	I0328 01:33:27.862610    6044 command_runner.go:130] ! I0328 01:32:18.531469       1 handler.go:275] Adding GroupVersion scheduling.k8s.io v1 to ResourceManager
	I0328 01:33:27.862639    6044 command_runner.go:130] ! W0328 01:32:18.531576       1 genericapiserver.go:742] Skipping API scheduling.k8s.io/v1beta1 because it has no resources.
	I0328 01:33:27.862639    6044 command_runner.go:130] ! W0328 01:32:18.531586       1 genericapiserver.go:742] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
	I0328 01:33:27.862639    6044 command_runner.go:130] ! I0328 01:32:18.534848       1 handler.go:275] Adding GroupVersion storage.k8s.io v1 to ResourceManager
	I0328 01:33:27.862639    6044 command_runner.go:130] ! W0328 01:32:18.534946       1 genericapiserver.go:742] Skipping API storage.k8s.io/v1beta1 because it has no resources.
	I0328 01:33:27.862639    6044 command_runner.go:130] ! W0328 01:32:18.534974       1 genericapiserver.go:742] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
	I0328 01:33:27.862639    6044 command_runner.go:130] ! I0328 01:32:18.537355       1 handler.go:275] Adding GroupVersion flowcontrol.apiserver.k8s.io v1beta3 to ResourceManager
	I0328 01:33:27.862639    6044 command_runner.go:130] ! I0328 01:32:18.539242       1 handler.go:275] Adding GroupVersion flowcontrol.apiserver.k8s.io v1 to ResourceManager
	I0328 01:33:27.862639    6044 command_runner.go:130] ! W0328 01:32:18.539354       1 genericapiserver.go:742] Skipping API flowcontrol.apiserver.k8s.io/v1beta2 because it has no resources.
	I0328 01:33:27.862639    6044 command_runner.go:130] ! W0328 01:32:18.539387       1 genericapiserver.go:742] Skipping API flowcontrol.apiserver.k8s.io/v1beta1 because it has no resources.
	I0328 01:33:27.862639    6044 command_runner.go:130] ! I0328 01:32:18.545662       1 handler.go:275] Adding GroupVersion apps v1 to ResourceManager
	I0328 01:33:27.862639    6044 command_runner.go:130] ! W0328 01:32:18.545825       1 genericapiserver.go:742] Skipping API apps/v1beta2 because it has no resources.
	I0328 01:33:27.862639    6044 command_runner.go:130] ! W0328 01:32:18.545834       1 genericapiserver.go:742] Skipping API apps/v1beta1 because it has no resources.
	I0328 01:33:27.862639    6044 command_runner.go:130] ! I0328 01:32:18.547229       1 handler.go:275] Adding GroupVersion admissionregistration.k8s.io v1 to ResourceManager
	I0328 01:33:27.862639    6044 command_runner.go:130] ! W0328 01:32:18.547341       1 genericapiserver.go:742] Skipping API admissionregistration.k8s.io/v1beta1 because it has no resources.
	I0328 01:33:27.862639    6044 command_runner.go:130] ! W0328 01:32:18.547350       1 genericapiserver.go:742] Skipping API admissionregistration.k8s.io/v1alpha1 because it has no resources.
	I0328 01:33:27.862639    6044 command_runner.go:130] ! I0328 01:32:18.548292       1 handler.go:275] Adding GroupVersion events.k8s.io v1 to ResourceManager
	I0328 01:33:27.862639    6044 command_runner.go:130] ! W0328 01:32:18.548390       1 genericapiserver.go:742] Skipping API events.k8s.io/v1beta1 because it has no resources.
	I0328 01:33:27.862639    6044 command_runner.go:130] ! I0328 01:32:18.574598       1 handler.go:275] Adding GroupVersion apiregistration.k8s.io v1 to ResourceManager
	I0328 01:33:27.862639    6044 command_runner.go:130] ! W0328 01:32:18.574814       1 genericapiserver.go:742] Skipping API apiregistration.k8s.io/v1beta1 because it has no resources.
	I0328 01:33:27.862639    6044 command_runner.go:130] ! I0328 01:32:19.274952       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0328 01:33:27.862639    6044 command_runner.go:130] ! I0328 01:32:19.275081       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0328 01:33:27.862639    6044 command_runner.go:130] ! I0328 01:32:19.275445       1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key"
	I0328 01:33:27.862639    6044 command_runner.go:130] ! I0328 01:32:19.275546       1 secure_serving.go:213] Serving securely on [::]:8443
	I0328 01:33:27.862639    6044 command_runner.go:130] ! I0328 01:32:19.275631       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0328 01:33:27.862639    6044 command_runner.go:130] ! I0328 01:32:19.276130       1 controller.go:80] Starting OpenAPI V3 AggregationController
	I0328 01:33:27.862639    6044 command_runner.go:130] ! I0328 01:32:19.279110       1 available_controller.go:423] Starting AvailableConditionController
	I0328 01:33:27.862639    6044 command_runner.go:130] ! I0328 01:32:19.280530       1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
	I0328 01:33:27.862639    6044 command_runner.go:130] ! I0328 01:32:19.289454       1 controller.go:116] Starting legacy_token_tracking_controller
	I0328 01:33:27.862639    6044 command_runner.go:130] ! I0328 01:32:19.289554       1 shared_informer.go:311] Waiting for caches to sync for configmaps
	I0328 01:33:27.862639    6044 command_runner.go:130] ! I0328 01:32:19.289661       1 aggregator.go:163] waiting for initial CRD sync...
	I0328 01:33:27.862639    6044 command_runner.go:130] ! I0328 01:32:19.291196       1 gc_controller.go:78] Starting apiserver lease garbage collector
	I0328 01:33:27.862639    6044 command_runner.go:130] ! I0328 01:32:19.291542       1 dynamic_serving_content.go:132] "Starting controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key"
	I0328 01:33:27.862639    6044 command_runner.go:130] ! I0328 01:32:19.292314       1 apiservice_controller.go:97] Starting APIServiceRegistrationController
	I0328 01:33:27.862639    6044 command_runner.go:130] ! I0328 01:32:19.292353       1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
	I0328 01:33:27.862639    6044 command_runner.go:130] ! I0328 01:32:19.292376       1 controller.go:78] Starting OpenAPI AggregationController
	I0328 01:33:27.862639    6044 command_runner.go:130] ! I0328 01:32:19.293395       1 system_namespaces_controller.go:67] Starting system namespaces controller
	I0328 01:33:27.862639    6044 command_runner.go:130] ! I0328 01:32:19.293575       1 customresource_discovery_controller.go:289] Starting DiscoveryController
	I0328 01:33:27.862639    6044 command_runner.go:130] ! I0328 01:32:19.279263       1 handler_discovery.go:412] Starting ResourceDiscoveryManager
	I0328 01:33:27.862639    6044 command_runner.go:130] ! I0328 01:32:19.301011       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0328 01:33:27.863178    6044 command_runner.go:130] ! I0328 01:32:19.301029       1 shared_informer.go:311] Waiting for caches to sync for crd-autoregister
	I0328 01:33:27.863178    6044 command_runner.go:130] ! I0328 01:32:19.304174       1 controller.go:133] Starting OpenAPI controller
	I0328 01:33:27.863225    6044 command_runner.go:130] ! I0328 01:32:19.304213       1 controller.go:85] Starting OpenAPI V3 controller
	I0328 01:33:27.863225    6044 command_runner.go:130] ! I0328 01:32:19.306745       1 naming_controller.go:291] Starting NamingConditionController
	I0328 01:33:27.863225    6044 command_runner.go:130] ! I0328 01:32:19.306779       1 establishing_controller.go:76] Starting EstablishingController
	I0328 01:33:27.863273    6044 command_runner.go:130] ! I0328 01:32:19.306794       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0328 01:33:27.863273    6044 command_runner.go:130] ! I0328 01:32:19.306807       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0328 01:33:27.863308    6044 command_runner.go:130] ! I0328 01:32:19.306818       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0328 01:33:27.863308    6044 command_runner.go:130] ! I0328 01:32:19.279295       1 apf_controller.go:374] Starting API Priority and Fairness config controller
	I0328 01:33:27.863308    6044 command_runner.go:130] ! I0328 01:32:19.279442       1 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
	I0328 01:33:27.863381    6044 command_runner.go:130] ! I0328 01:32:19.312069       1 shared_informer.go:311] Waiting for caches to sync for cluster_authentication_trust_controller
	I0328 01:33:27.863381    6044 command_runner.go:130] ! I0328 01:32:19.334928       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0328 01:33:27.863381    6044 command_runner.go:130] ! I0328 01:32:19.335653       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0328 01:33:27.863381    6044 command_runner.go:130] ! I0328 01:32:19.499336       1 shared_informer.go:318] Caches are synced for configmaps
	I0328 01:33:27.863381    6044 command_runner.go:130] ! I0328 01:32:19.501912       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0328 01:33:27.863381    6044 command_runner.go:130] ! I0328 01:32:19.504433       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0328 01:33:27.863381    6044 command_runner.go:130] ! I0328 01:32:19.506496       1 aggregator.go:165] initial CRD sync complete...
	I0328 01:33:27.863381    6044 command_runner.go:130] ! I0328 01:32:19.506538       1 autoregister_controller.go:141] Starting autoregister controller
	I0328 01:33:27.863381    6044 command_runner.go:130] ! I0328 01:32:19.506548       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0328 01:33:27.863381    6044 command_runner.go:130] ! I0328 01:32:19.506871       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0328 01:33:27.863381    6044 command_runner.go:130] ! I0328 01:32:19.506977       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0328 01:33:27.863381    6044 command_runner.go:130] ! I0328 01:32:19.519086       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0328 01:33:27.863381    6044 command_runner.go:130] ! I0328 01:32:19.542058       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0328 01:33:27.863381    6044 command_runner.go:130] ! I0328 01:32:19.580921       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0328 01:33:27.863381    6044 command_runner.go:130] ! I0328 01:32:19.592848       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0328 01:33:27.863381    6044 command_runner.go:130] ! I0328 01:32:19.608262       1 cache.go:39] Caches are synced for autoregister controller
	I0328 01:33:27.863381    6044 command_runner.go:130] ! I0328 01:32:20.302603       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0328 01:33:27.863381    6044 command_runner.go:130] ! W0328 01:32:20.857698       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.28.227.122 172.28.229.19]
	I0328 01:33:27.863381    6044 command_runner.go:130] ! I0328 01:32:20.859624       1 controller.go:624] quota admission added evaluator for: endpoints
	I0328 01:33:27.863381    6044 command_runner.go:130] ! I0328 01:32:20.870212       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0328 01:33:27.863381    6044 command_runner.go:130] ! I0328 01:32:22.795650       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0328 01:33:27.863381    6044 command_runner.go:130] ! I0328 01:32:23.151124       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0328 01:33:27.863381    6044 command_runner.go:130] ! I0328 01:32:23.177645       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0328 01:33:27.863381    6044 command_runner.go:130] ! I0328 01:32:23.338313       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0328 01:33:27.863381    6044 command_runner.go:130] ! I0328 01:32:23.353620       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0328 01:33:27.863381    6044 command_runner.go:130] ! W0328 01:32:40.864669       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.28.229.19]
	I0328 01:33:27.870477    6044 logs.go:123] Gathering logs for coredns [e6a5a75ec447] ...
	I0328 01:33:27.870477    6044 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6a5a75ec447"
	I0328 01:33:27.902923    6044 command_runner.go:130] > .:53
	I0328 01:33:27.902992    6044 command_runner.go:130] > [INFO] plugin/reload: Running configuration SHA512 = 61f4d0960164fdf8d8157aaa96d041acf5b29f3c98ba802d705114162ff9f2cc889bbb973f9b8023f3112734912ee6f4eadc4faa21115183d5697de30dae3805
	I0328 01:33:27.902992    6044 command_runner.go:130] > CoreDNS-1.11.1
	I0328 01:33:27.902992    6044 command_runner.go:130] > linux/amd64, go1.20.7, ae2bbc2
	I0328 01:33:27.902992    6044 command_runner.go:130] > [INFO] 127.0.0.1:56542 - 57483 "HINFO IN 863318367541877849.2825438388179145044. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.037994825s
	I0328 01:33:27.903895    6044 logs.go:123] Gathering logs for kindnet [ee99098e42fc] ...
	I0328 01:33:27.903960    6044 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee99098e42fc"
	I0328 01:33:27.935867    6044 command_runner.go:130] ! I0328 01:32:22.319753       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0328 01:33:27.935867    6044 command_runner.go:130] ! I0328 01:32:22.320254       1 main.go:107] hostIP = 172.28.229.19
	I0328 01:33:27.935867    6044 command_runner.go:130] ! podIP = 172.28.229.19
	I0328 01:33:27.935867    6044 command_runner.go:130] ! I0328 01:32:22.321740       1 main.go:116] setting mtu 1500 for CNI 
	I0328 01:33:27.935867    6044 command_runner.go:130] ! I0328 01:32:22.321777       1 main.go:146] kindnetd IP family: "ipv4"
	I0328 01:33:27.935867    6044 command_runner.go:130] ! I0328 01:32:22.321799       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0328 01:33:27.935867    6044 command_runner.go:130] ! I0328 01:32:52.738929       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: i/o timeout
	I0328 01:33:27.935867    6044 command_runner.go:130] ! I0328 01:32:52.794200       1 main.go:223] Handling node with IPs: map[172.28.229.19:{}]
	I0328 01:33:27.935867    6044 command_runner.go:130] ! I0328 01:32:52.794320       1 main.go:227] handling current node
	I0328 01:33:27.935867    6044 command_runner.go:130] ! I0328 01:32:52.794662       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:27.935867    6044 command_runner.go:130] ! I0328 01:32:52.794805       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:27.935867    6044 command_runner.go:130] ! I0328 01:32:52.794957       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 172.28.230.250 Flags: [] Table: 0} 
	I0328 01:33:27.935867    6044 command_runner.go:130] ! I0328 01:32:52.795458       1 main.go:223] Handling node with IPs: map[172.28.224.172:{}]
	I0328 01:33:27.935867    6044 command_runner.go:130] ! I0328 01:32:52.795540       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.3.0/24] 
	I0328 01:33:27.935867    6044 command_runner.go:130] ! I0328 01:32:52.795606       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.3.0/24 Src: <nil> Gw: 172.28.224.172 Flags: [] Table: 0} 
	I0328 01:33:27.935867    6044 command_runner.go:130] ! I0328 01:33:02.803479       1 main.go:223] Handling node with IPs: map[172.28.229.19:{}]
	I0328 01:33:27.935867    6044 command_runner.go:130] ! I0328 01:33:02.803569       1 main.go:227] handling current node
	I0328 01:33:27.935867    6044 command_runner.go:130] ! I0328 01:33:02.803584       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:27.935867    6044 command_runner.go:130] ! I0328 01:33:02.803592       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:27.935867    6044 command_runner.go:130] ! I0328 01:33:02.803771       1 main.go:223] Handling node with IPs: map[172.28.224.172:{}]
	I0328 01:33:27.935867    6044 command_runner.go:130] ! I0328 01:33:02.803938       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.3.0/24] 
	I0328 01:33:27.935867    6044 command_runner.go:130] ! I0328 01:33:12.813148       1 main.go:223] Handling node with IPs: map[172.28.229.19:{}]
	I0328 01:33:27.935867    6044 command_runner.go:130] ! I0328 01:33:12.813258       1 main.go:227] handling current node
	I0328 01:33:27.935867    6044 command_runner.go:130] ! I0328 01:33:12.813273       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:27.935867    6044 command_runner.go:130] ! I0328 01:33:12.813281       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:27.935867    6044 command_runner.go:130] ! I0328 01:33:12.813393       1 main.go:223] Handling node with IPs: map[172.28.224.172:{}]
	I0328 01:33:27.935867    6044 command_runner.go:130] ! I0328 01:33:12.813441       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.3.0/24] 
	I0328 01:33:27.935867    6044 command_runner.go:130] ! I0328 01:33:22.829358       1 main.go:223] Handling node with IPs: map[172.28.229.19:{}]
	I0328 01:33:27.935867    6044 command_runner.go:130] ! I0328 01:33:22.829449       1 main.go:227] handling current node
	I0328 01:33:27.935867    6044 command_runner.go:130] ! I0328 01:33:22.829466       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:27.935867    6044 command_runner.go:130] ! I0328 01:33:22.829475       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:27.936873    6044 command_runner.go:130] ! I0328 01:33:22.829915       1 main.go:223] Handling node with IPs: map[172.28.224.172:{}]
	I0328 01:33:27.936873    6044 command_runner.go:130] ! I0328 01:33:22.829982       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.3.0/24] 
	I0328 01:33:27.939860    6044 logs.go:123] Gathering logs for container status ...
	I0328 01:33:27.939860    6044 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:33:28.061097    6044 command_runner.go:130] > CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	I0328 01:33:28.061097    6044 command_runner.go:130] > dea6e77fe6072       8c811b4aec35f                                                                                         4 seconds ago        Running             busybox                   1                   57a41fbc578d5       busybox-7fdf7869d9-ct428
	I0328 01:33:28.061097    6044 command_runner.go:130] > e6a5a75ec447f       cbb01a7bd410d                                                                                         4 seconds ago        Running             coredns                   1                   d3a9caca46521       coredns-76f75df574-776ph
	I0328 01:33:28.061097    6044 command_runner.go:130] > 64647587ffc1f       6e38f40d628db                                                                                         24 seconds ago       Running             storage-provisioner       2                   821d3cf9ae1a9       storage-provisioner
	I0328 01:33:28.061097    6044 command_runner.go:130] > ee99098e42fc1       4950bb10b3f87                                                                                         About a minute ago   Running             kindnet-cni               1                   347f7ad7ebaed       kindnet-rwghf
	I0328 01:33:28.061097    6044 command_runner.go:130] > 4dcf03394ea80       6e38f40d628db                                                                                         About a minute ago   Exited              storage-provisioner       1                   821d3cf9ae1a9       storage-provisioner
	I0328 01:33:28.061097    6044 command_runner.go:130] > 7c9638784c60f       a1d263b5dc5b0                                                                                         About a minute ago   Running             kube-proxy                1                   dfd01cb54b7d8       kube-proxy-47rqg
	I0328 01:33:28.061097    6044 command_runner.go:130] > 6539c85e1b61f       39f995c9f1996                                                                                         About a minute ago   Running             kube-apiserver            0                   4dd7c46520744       kube-apiserver-multinode-240000
	I0328 01:33:28.061097    6044 command_runner.go:130] > ab4a76ecb029b       3861cfcd7c04c                                                                                         About a minute ago   Running             etcd                      0                   8780a18ab9755       etcd-multinode-240000
	I0328 01:33:28.061097    6044 command_runner.go:130] > bc83a37dbd03c       8c390d98f50c0                                                                                         About a minute ago   Running             kube-scheduler            1                   8cf9dbbfda9ea       kube-scheduler-multinode-240000
	I0328 01:33:28.061097    6044 command_runner.go:130] > ceaccf323deed       6052a25da3f97                                                                                         About a minute ago   Running             kube-controller-manager   1                   3314134e34d83       kube-controller-manager-multinode-240000
	I0328 01:33:28.061097    6044 command_runner.go:130] > a130300bc7839       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   21 minutes ago       Exited              busybox                   0                   930fbfde452c0       busybox-7fdf7869d9-ct428
	I0328 01:33:28.061097    6044 command_runner.go:130] > 29e516c918ef4       cbb01a7bd410d                                                                                         25 minutes ago       Exited              coredns                   0                   6b6f67390b070       coredns-76f75df574-776ph
	I0328 01:33:28.061097    6044 command_runner.go:130] > dc9808261b21c       kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988              25 minutes ago       Exited              kindnet-cni               0                   6ae82cd0a8489       kindnet-rwghf
	I0328 01:33:28.061097    6044 command_runner.go:130] > bb0b3c5422645       a1d263b5dc5b0                                                                                         25 minutes ago       Exited              kube-proxy                0                   5d9ed3a20e885       kube-proxy-47rqg
	I0328 01:33:28.061097    6044 command_runner.go:130] > 1aa05268773e4       6052a25da3f97                                                                                         26 minutes ago       Exited              kube-controller-manager   0                   763932cfdf0b0       kube-controller-manager-multinode-240000
	I0328 01:33:28.061625    6044 command_runner.go:130] > 7061eab02790d       8c390d98f50c0                                                                                         26 minutes ago       Exited              kube-scheduler            0                   7415d077c6f81       kube-scheduler-multinode-240000
	I0328 01:33:28.064088    6044 logs.go:123] Gathering logs for etcd [ab4a76ecb029] ...
	I0328 01:33:28.064088    6044 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab4a76ecb029"
	I0328 01:33:28.098061    6044 command_runner.go:130] ! {"level":"warn","ts":"2024-03-28T01:32:15.724971Z","caller":"embed/config.go:679","msg":"Running http and grpc server on single port. This is not recommended for production."}
	I0328 01:33:28.098061    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:15.726473Z","caller":"etcdmain/etcd.go:73","msg":"Running: ","args":["etcd","--advertise-client-urls=https://172.28.229.19:2379","--cert-file=/var/lib/minikube/certs/etcd/server.crt","--client-cert-auth=true","--data-dir=/var/lib/minikube/etcd","--experimental-initial-corrupt-check=true","--experimental-watch-progress-notify-interval=5s","--initial-advertise-peer-urls=https://172.28.229.19:2380","--initial-cluster=multinode-240000=https://172.28.229.19:2380","--key-file=/var/lib/minikube/certs/etcd/server.key","--listen-client-urls=https://127.0.0.1:2379,https://172.28.229.19:2379","--listen-metrics-urls=http://127.0.0.1:2381","--listen-peer-urls=https://172.28.229.19:2380","--name=multinode-240000","--peer-cert-file=/var/lib/minikube/certs/etcd/peer.crt","--peer-client-cert-auth=true","--peer-key-file=/var/lib/minikube/certs/etcd/peer.key","--peer-trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt","--proxy-refresh-interval=70000","--snapshot-count=10000","--trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt"]}
	I0328 01:33:28.098061    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:15.727203Z","caller":"etcdmain/etcd.go:116","msg":"server has been already initialized","data-dir":"/var/lib/minikube/etcd","dir-type":"member"}
	I0328 01:33:28.098061    6044 command_runner.go:130] ! {"level":"warn","ts":"2024-03-28T01:32:15.727384Z","caller":"embed/config.go:679","msg":"Running http and grpc server on single port. This is not recommended for production."}
	I0328 01:33:28.098061    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:15.727623Z","caller":"embed/etcd.go:127","msg":"configuring peer listeners","listen-peer-urls":["https://172.28.229.19:2380"]}
	I0328 01:33:28.098061    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:15.728158Z","caller":"embed/etcd.go:494","msg":"starting with peer TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	I0328 01:33:28.098061    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:15.738374Z","caller":"embed/etcd.go:135","msg":"configuring client listeners","listen-client-urls":["https://127.0.0.1:2379","https://172.28.229.19:2379"]}
	I0328 01:33:28.098061    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:15.74108Z","caller":"embed/etcd.go:308","msg":"starting an etcd server","etcd-version":"3.5.12","git-sha":"e7b3bb6cc","go-version":"go1.20.13","go-os":"linux","go-arch":"amd64","max-cpu-set":2,"max-cpu-available":2,"member-initialized":true,"name":"multinode-240000","data-dir":"/var/lib/minikube/etcd","wal-dir":"","wal-dir-dedicated":"","member-dir":"/var/lib/minikube/etcd/member","force-new-cluster":false,"heartbeat-interval":"100ms","election-timeout":"1s","initial-election-tick-advance":true,"snapshot-count":10000,"max-wals":5,"max-snapshots":5,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["https://172.28.229.19:2380"],"listen-peer-urls":["https://172.28.229.19:2380"],"advertise-client-urls":["https://172.28.229.19:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.28.229.19:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"],"cors":["*"],"host-whitelist":["*"],"initial-cluster":"","initial-cluster-state":"new","initial-cluster-token":"","quota-backend-bytes":2147483648,"max-request-bytes":1572864,"max-concurrent-streams":4294967295,"pre-vote":true,"initial-corrupt-check":true,"corrupt-check-time-interval":"0s","compact-check-time-enabled":false,"compact-check-time-interval":"1m0s","auto-compaction-mode":"periodic","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":"","downgrade-check-interval":"5s"}
	I0328 01:33:28.098061    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:15.764546Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"21.677054ms"}
	I0328 01:33:28.098640    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:15.798451Z","caller":"etcdserver/server.go:532","msg":"No snapshot found. Recovering WAL from scratch!"}
	I0328 01:33:28.098725    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:15.829844Z","caller":"etcdserver/raft.go:530","msg":"restarting local member","cluster-id":"9d63dbc5e8f5386f","local-member-id":"8337aaa1903c5250","commit-index":2146}
	I0328 01:33:28.098725    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:15.830336Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8337aaa1903c5250 switched to configuration voters=()"}
	I0328 01:33:28.098725    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:15.830979Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8337aaa1903c5250 became follower at term 2"}
	I0328 01:33:28.098830    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:15.831279Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft 8337aaa1903c5250 [peers: [], term: 2, commit: 2146, applied: 0, lastindex: 2146, lastterm: 2]"}
	I0328 01:33:28.098830    6044 command_runner.go:130] ! {"level":"warn","ts":"2024-03-28T01:32:15.847923Z","caller":"auth/store.go:1241","msg":"simple token is not cryptographically signed"}
	I0328 01:33:28.098830    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:15.855761Z","caller":"mvcc/kvstore.go:341","msg":"restored last compact revision","meta-bucket-name":"meta","meta-bucket-name-key":"finishedCompactRev","restored-compact-revision":1393}
	I0328 01:33:28.098830    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:15.869333Z","caller":"mvcc/kvstore.go:407","msg":"kvstore restored","current-rev":1856}
	I0328 01:33:28.098906    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:15.878748Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	I0328 01:33:28.098906    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:15.88958Z","caller":"etcdserver/corrupt.go:96","msg":"starting initial corruption check","local-member-id":"8337aaa1903c5250","timeout":"7s"}
	I0328 01:33:28.098953    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:15.890509Z","caller":"etcdserver/corrupt.go:177","msg":"initial corruption checking passed; no corruption","local-member-id":"8337aaa1903c5250"}
	I0328 01:33:28.098953    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:15.890567Z","caller":"etcdserver/server.go:860","msg":"starting etcd server","local-member-id":"8337aaa1903c5250","local-server-version":"3.5.12","cluster-version":"to_be_decided"}
	I0328 01:33:28.098953    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:15.891226Z","caller":"etcdserver/server.go:760","msg":"starting initial election tick advance","election-ticks":10}
	I0328 01:33:28.098953    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:15.894393Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	I0328 01:33:28.098953    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:15.894489Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	I0328 01:33:28.098953    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:15.894506Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	I0328 01:33:28.098953    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:15.895008Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8337aaa1903c5250 switched to configuration voters=(9455213553573974608)"}
	I0328 01:33:28.098953    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:15.895115Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"9d63dbc5e8f5386f","local-member-id":"8337aaa1903c5250","added-peer-id":"8337aaa1903c5250","added-peer-peer-urls":["https://172.28.227.122:2380"]}
	I0328 01:33:28.098953    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:15.895259Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"9d63dbc5e8f5386f","local-member-id":"8337aaa1903c5250","cluster-version":"3.5"}
	I0328 01:33:28.098953    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:15.895348Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	I0328 01:33:28.098953    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:15.908515Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	I0328 01:33:28.099370    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:15.908865Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"8337aaa1903c5250","initial-advertise-peer-urls":["https://172.28.229.19:2380"],"listen-peer-urls":["https://172.28.229.19:2380"],"advertise-client-urls":["https://172.28.229.19:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.28.229.19:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	I0328 01:33:28.099370    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:15.908914Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	I0328 01:33:28.099370    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:15.908997Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"172.28.229.19:2380"}
	I0328 01:33:28.099370    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:15.909011Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"172.28.229.19:2380"}
	I0328 01:33:28.099370    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:17.232003Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8337aaa1903c5250 is starting a new election at term 2"}
	I0328 01:33:28.099505    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:17.232075Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8337aaa1903c5250 became pre-candidate at term 2"}
	I0328 01:33:28.099542    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:17.232112Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8337aaa1903c5250 received MsgPreVoteResp from 8337aaa1903c5250 at term 2"}
	I0328 01:33:28.099579    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:17.232126Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8337aaa1903c5250 became candidate at term 3"}
	I0328 01:33:28.099638    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:17.232135Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8337aaa1903c5250 received MsgVoteResp from 8337aaa1903c5250 at term 3"}
	I0328 01:33:28.099638    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:17.232146Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8337aaa1903c5250 became leader at term 3"}
	I0328 01:33:28.099638    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:17.232158Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 8337aaa1903c5250 elected leader 8337aaa1903c5250 at term 3"}
	I0328 01:33:28.099719    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:17.237341Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"8337aaa1903c5250","local-member-attributes":"{Name:multinode-240000 ClientURLs:[https://172.28.229.19:2379]}","request-path":"/0/members/8337aaa1903c5250/attributes","cluster-id":"9d63dbc5e8f5386f","publish-timeout":"7s"}
	I0328 01:33:28.099762    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:17.237562Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	I0328 01:33:28.099762    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:17.239961Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	I0328 01:33:28.099800    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:17.263569Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	I0328 01:33:28.099800    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:17.263595Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	I0328 01:33:28.099862    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:17.283007Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.28.229.19:2379"}
	I0328 01:33:28.099886    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:17.301354Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	I0328 01:33:28.107018    6044 logs.go:123] Gathering logs for coredns [29e516c918ef] ...
	I0328 01:33:28.107018    6044 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29e516c918ef"
	I0328 01:33:28.155039    6044 command_runner.go:130] > .:53
	I0328 01:33:28.156040    6044 command_runner.go:130] > [INFO] plugin/reload: Running configuration SHA512 = 61f4d0960164fdf8d8157aaa96d041acf5b29f3c98ba802d705114162ff9f2cc889bbb973f9b8023f3112734912ee6f4eadc4faa21115183d5697de30dae3805
	I0328 01:33:28.156040    6044 command_runner.go:130] > CoreDNS-1.11.1
	I0328 01:33:28.156040    6044 command_runner.go:130] > linux/amd64, go1.20.7, ae2bbc2
	I0328 01:33:28.156040    6044 command_runner.go:130] > [INFO] 127.0.0.1:60283 - 16312 "HINFO IN 2326044719089555672.3300393267380208701. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.054677372s
	I0328 01:33:28.156040    6044 command_runner.go:130] > [INFO] 10.244.0.3:41371 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000247501s
	I0328 01:33:28.156040    6044 command_runner.go:130] > [INFO] 10.244.0.3:43447 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.117900616s
	I0328 01:33:28.156040    6044 command_runner.go:130] > [INFO] 10.244.0.3:42513 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd 60 0.033474818s
	I0328 01:33:28.156040    6044 command_runner.go:130] > [INFO] 10.244.0.3:40448 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 1.188161196s
	I0328 01:33:28.156040    6044 command_runner.go:130] > [INFO] 10.244.1.2:56943 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000152401s
	I0328 01:33:28.156040    6044 command_runner.go:130] > [INFO] 10.244.1.2:41058 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 140 0.000086901s
	I0328 01:33:28.156040    6044 command_runner.go:130] > [INFO] 10.244.1.2:34293 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd 60 0.0000605s
	I0328 01:33:28.156040    6044 command_runner.go:130] > [INFO] 10.244.1.2:49894 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,aa,rd,ra 140 0.00006s
	I0328 01:33:28.156040    6044 command_runner.go:130] > [INFO] 10.244.0.3:49837 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001111s
	I0328 01:33:28.156040    6044 command_runner.go:130] > [INFO] 10.244.0.3:33220 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.017189461s
	I0328 01:33:28.156040    6044 command_runner.go:130] > [INFO] 10.244.0.3:45579 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000277601s
	I0328 01:33:28.156040    6044 command_runner.go:130] > [INFO] 10.244.0.3:51082 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000190101s
	I0328 01:33:28.156040    6044 command_runner.go:130] > [INFO] 10.244.0.3:51519 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.026528294s
	I0328 01:33:28.156040    6044 command_runner.go:130] > [INFO] 10.244.0.3:59498 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000117701s
	I0328 01:33:28.156040    6044 command_runner.go:130] > [INFO] 10.244.0.3:42474 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000217s
	I0328 01:33:28.156040    6044 command_runner.go:130] > [INFO] 10.244.0.3:60151 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0001204s
	I0328 01:33:28.156040    6044 command_runner.go:130] > [INFO] 10.244.1.2:50831 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001128s
	I0328 01:33:28.156040    6044 command_runner.go:130] > [INFO] 10.244.1.2:41628 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.0000727s
	I0328 01:33:28.156040    6044 command_runner.go:130] > [INFO] 10.244.1.2:58750 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000090601s
	I0328 01:33:28.156040    6044 command_runner.go:130] > [INFO] 10.244.1.2:59003 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0000565s
	I0328 01:33:28.156040    6044 command_runner.go:130] > [INFO] 10.244.1.2:44988 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.0000534s
	I0328 01:33:28.156040    6044 command_runner.go:130] > [INFO] 10.244.1.2:46242 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0000553s
	I0328 01:33:28.156040    6044 command_runner.go:130] > [INFO] 10.244.1.2:54917 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0000638s
	I0328 01:33:28.156040    6044 command_runner.go:130] > [INFO] 10.244.1.2:39304 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000177201s
	I0328 01:33:28.156040    6044 command_runner.go:130] > [INFO] 10.244.0.3:48823 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0000796s
	I0328 01:33:28.156040    6044 command_runner.go:130] > [INFO] 10.244.0.3:44709 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000142901s
	I0328 01:33:28.156040    6044 command_runner.go:130] > [INFO] 10.244.0.3:48375 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0000774s
	I0328 01:33:28.156040    6044 command_runner.go:130] > [INFO] 10.244.0.3:58925 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000125101s
	I0328 01:33:28.156040    6044 command_runner.go:130] > [INFO] 10.244.1.2:59246 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001171s
	I0328 01:33:28.156040    6044 command_runner.go:130] > [INFO] 10.244.1.2:47730 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0000697s
	I0328 01:33:28.156040    6044 command_runner.go:130] > [INFO] 10.244.1.2:33031 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0000695s
	I0328 01:33:28.156040    6044 command_runner.go:130] > [INFO] 10.244.1.2:50853 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000057s
	I0328 01:33:28.156040    6044 command_runner.go:130] > [INFO] 10.244.0.3:39682 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000390002s
	I0328 01:33:28.156040    6044 command_runner.go:130] > [INFO] 10.244.0.3:52761 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000108301s
	I0328 01:33:28.156040    6044 command_runner.go:130] > [INFO] 10.244.0.3:46476 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000158601s
	I0328 01:33:28.156040    6044 command_runner.go:130] > [INFO] 10.244.0.3:57613 - 5 "PTR IN 1.224.28.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.0000931s
	I0328 01:33:28.156040    6044 command_runner.go:130] > [INFO] 10.244.1.2:43367 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000233301s
	I0328 01:33:28.156040    6044 command_runner.go:130] > [INFO] 10.244.1.2:50120 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.0002331s
	I0328 01:33:28.156040    6044 command_runner.go:130] > [INFO] 10.244.1.2:43779 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.0000821s
	I0328 01:33:28.156040    6044 command_runner.go:130] > [INFO] 10.244.1.2:37155 - 5 "PTR IN 1.224.28.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.0000589s
	I0328 01:33:28.156040    6044 command_runner.go:130] > [INFO] SIGTERM: Shutting down servers then terminating
	I0328 01:33:28.156040    6044 command_runner.go:130] > [INFO] plugin/health: Going into lameduck mode for 5s
	I0328 01:33:28.159082    6044 logs.go:123] Gathering logs for kube-scheduler [bc83a37dbd03] ...
	I0328 01:33:28.159082    6044 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc83a37dbd03"
	I0328 01:33:28.191039    6044 command_runner.go:130] ! I0328 01:32:16.704993       1 serving.go:380] Generated self-signed cert in-memory
	I0328 01:33:28.191039    6044 command_runner.go:130] ! W0328 01:32:19.361735       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	I0328 01:33:28.191039    6044 command_runner.go:130] ! W0328 01:32:19.361772       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	I0328 01:33:28.191039    6044 command_runner.go:130] ! W0328 01:32:19.361786       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	I0328 01:33:28.191039    6044 command_runner.go:130] ! W0328 01:32:19.361794       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0328 01:33:28.191039    6044 command_runner.go:130] ! I0328 01:32:19.443650       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.29.3"
	I0328 01:33:28.191039    6044 command_runner.go:130] ! I0328 01:32:19.443696       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0328 01:33:28.191039    6044 command_runner.go:130] ! I0328 01:32:19.451824       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0328 01:33:28.191039    6044 command_runner.go:130] ! I0328 01:32:19.452157       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0328 01:33:28.191039    6044 command_runner.go:130] ! I0328 01:32:19.452206       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0328 01:33:28.191039    6044 command_runner.go:130] ! I0328 01:32:19.452231       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0328 01:33:28.191039    6044 command_runner.go:130] ! I0328 01:32:19.556393       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0328 01:33:28.193509    6044 logs.go:123] Gathering logs for kube-proxy [7c9638784c60] ...
	I0328 01:33:28.193567    6044 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c9638784c60"
	I0328 01:33:28.221714    6044 command_runner.go:130] ! I0328 01:32:22.346613       1 server_others.go:72] "Using iptables proxy"
	I0328 01:33:28.221714    6044 command_runner.go:130] ! I0328 01:32:22.432600       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["172.28.229.19"]
	I0328 01:33:28.221714    6044 command_runner.go:130] ! I0328 01:32:22.670309       1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
	I0328 01:33:28.221714    6044 command_runner.go:130] ! I0328 01:32:22.670342       1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0328 01:33:28.222640    6044 command_runner.go:130] ! I0328 01:32:22.670422       1 server_others.go:168] "Using iptables Proxier"
	I0328 01:33:28.222640    6044 command_runner.go:130] ! I0328 01:32:22.691003       1 proxier.go:245] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0328 01:33:28.222640    6044 command_runner.go:130] ! I0328 01:32:22.691955       1 server.go:865] "Version info" version="v1.29.3"
	I0328 01:33:28.222698    6044 command_runner.go:130] ! I0328 01:32:22.691998       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0328 01:33:28.222698    6044 command_runner.go:130] ! I0328 01:32:22.703546       1 config.go:188] "Starting service config controller"
	I0328 01:33:28.222748    6044 command_runner.go:130] ! I0328 01:32:22.706995       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0328 01:33:28.222765    6044 command_runner.go:130] ! I0328 01:32:22.707357       1 config.go:97] "Starting endpoint slice config controller"
	I0328 01:33:28.222765    6044 command_runner.go:130] ! I0328 01:32:22.707370       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0328 01:33:28.223012    6044 command_runner.go:130] ! I0328 01:32:22.708174       1 config.go:315] "Starting node config controller"
	I0328 01:33:28.223266    6044 command_runner.go:130] ! I0328 01:32:22.708184       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0328 01:33:28.223266    6044 command_runner.go:130] ! I0328 01:32:22.807593       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0328 01:33:28.223266    6044 command_runner.go:130] ! I0328 01:32:22.807699       1 shared_informer.go:318] Caches are synced for service config
	I0328 01:33:28.223266    6044 command_runner.go:130] ! I0328 01:32:22.808439       1 shared_informer.go:318] Caches are synced for node config
	I0328 01:33:28.225626    6044 logs.go:123] Gathering logs for kube-controller-manager [ceaccf323dee] ...
	I0328 01:33:28.225626    6044 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ceaccf323dee"
	I0328 01:33:28.255210    6044 command_runner.go:130] ! I0328 01:32:17.221400       1 serving.go:380] Generated self-signed cert in-memory
	I0328 01:33:28.255210    6044 command_runner.go:130] ! I0328 01:32:17.938996       1 controllermanager.go:187] "Starting" version="v1.29.3"
	I0328 01:33:28.255277    6044 command_runner.go:130] ! I0328 01:32:17.939043       1 controllermanager.go:189] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0328 01:33:28.255277    6044 command_runner.go:130] ! I0328 01:32:17.943203       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0328 01:33:28.255277    6044 command_runner.go:130] ! I0328 01:32:17.943369       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0328 01:33:28.255337    6044 command_runner.go:130] ! I0328 01:32:17.944549       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0328 01:33:28.255337    6044 command_runner.go:130] ! I0328 01:32:17.944700       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0328 01:33:28.255337    6044 command_runner.go:130] ! I0328 01:32:21.401842       1 controllermanager.go:735] "Started controller" controller="serviceaccount-token-controller"
	I0328 01:33:28.255337    6044 command_runner.go:130] ! I0328 01:32:21.405585       1 shared_informer.go:311] Waiting for caches to sync for tokens
	I0328 01:33:28.255337    6044 command_runner.go:130] ! I0328 01:32:21.409924       1 controllermanager.go:735] "Started controller" controller="cronjob-controller"
	I0328 01:33:28.255430    6044 command_runner.go:130] ! I0328 01:32:21.410592       1 cronjob_controllerv2.go:139] "Starting cronjob controller v2"
	I0328 01:33:28.255430    6044 command_runner.go:130] ! I0328 01:32:21.410608       1 shared_informer.go:311] Waiting for caches to sync for cronjob
	I0328 01:33:28.255430    6044 command_runner.go:130] ! I0328 01:32:21.415437       1 controllermanager.go:735] "Started controller" controller="certificatesigningrequest-cleaner-controller"
	I0328 01:33:28.255430    6044 command_runner.go:130] ! I0328 01:32:21.415588       1 cleaner.go:83] "Starting CSR cleaner controller"
	I0328 01:33:28.255430    6044 command_runner.go:130] ! I0328 01:32:21.423473       1 controllermanager.go:735] "Started controller" controller="pod-garbage-collector-controller"
	I0328 01:33:28.255430    6044 command_runner.go:130] ! I0328 01:32:21.424183       1 gc_controller.go:101] "Starting GC controller"
	I0328 01:33:28.255430    6044 command_runner.go:130] ! I0328 01:32:21.424205       1 shared_informer.go:311] Waiting for caches to sync for GC
	I0328 01:33:28.255522    6044 command_runner.go:130] ! I0328 01:32:21.428774       1 controllermanager.go:735] "Started controller" controller="replicaset-controller"
	I0328 01:33:28.255522    6044 command_runner.go:130] ! I0328 01:32:21.429480       1 replica_set.go:214] "Starting controller" name="replicaset"
	I0328 01:33:28.255522    6044 command_runner.go:130] ! I0328 01:32:21.429495       1 shared_informer.go:311] Waiting for caches to sync for ReplicaSet
	I0328 01:33:28.255522    6044 command_runner.go:130] ! I0328 01:32:21.434934       1 controllermanager.go:735] "Started controller" controller="token-cleaner-controller"
	I0328 01:33:28.255641    6044 command_runner.go:130] ! I0328 01:32:21.435336       1 tokencleaner.go:112] "Starting token cleaner controller"
	I0328 01:33:28.255641    6044 command_runner.go:130] ! I0328 01:32:21.440600       1 shared_informer.go:311] Waiting for caches to sync for token_cleaner
	I0328 01:33:28.255694    6044 command_runner.go:130] ! I0328 01:32:21.440609       1 shared_informer.go:318] Caches are synced for token_cleaner
	I0328 01:33:28.255714    6044 command_runner.go:130] ! I0328 01:32:21.447308       1 controllermanager.go:735] "Started controller" controller="persistentvolume-binder-controller"
	I0328 01:33:28.255714    6044 command_runner.go:130] ! I0328 01:32:21.450160       1 pv_controller_base.go:319] "Starting persistent volume controller"
	I0328 01:33:28.255714    6044 command_runner.go:130] ! I0328 01:32:21.450574       1 shared_informer.go:311] Waiting for caches to sync for persistent volume
	I0328 01:33:28.255777    6044 command_runner.go:130] ! I0328 01:32:21.459890       1 controllermanager.go:735] "Started controller" controller="taint-eviction-controller"
	I0328 01:33:28.255777    6044 command_runner.go:130] ! I0328 01:32:21.463892       1 taint_eviction.go:285] "Starting" controller="taint-eviction-controller"
	I0328 01:33:28.255803    6044 command_runner.go:130] ! I0328 01:32:21.464792       1 taint_eviction.go:291] "Sending events to api server"
	I0328 01:33:28.255920    6044 command_runner.go:130] ! I0328 01:32:21.465478       1 shared_informer.go:311] Waiting for caches to sync for taint-eviction-controller
	I0328 01:33:28.255963    6044 command_runner.go:130] ! I0328 01:32:21.467842       1 controllermanager.go:735] "Started controller" controller="endpoints-controller"
	I0328 01:33:28.255963    6044 command_runner.go:130] ! I0328 01:32:21.471786       1 endpoints_controller.go:174] "Starting endpoint controller"
	I0328 01:33:28.255963    6044 command_runner.go:130] ! I0328 01:32:21.472200       1 shared_informer.go:311] Waiting for caches to sync for endpoint
	I0328 01:33:28.255963    6044 command_runner.go:130] ! I0328 01:32:21.482388       1 controllermanager.go:735] "Started controller" controller="endpointslice-mirroring-controller"
	I0328 01:33:28.256036    6044 command_runner.go:130] ! I0328 01:32:21.482635       1 endpointslicemirroring_controller.go:223] "Starting EndpointSliceMirroring controller"
	I0328 01:33:28.256036    6044 command_runner.go:130] ! I0328 01:32:21.482650       1 shared_informer.go:311] Waiting for caches to sync for endpoint_slice_mirroring
	I0328 01:33:28.256036    6044 command_runner.go:130] ! I0328 01:32:21.506106       1 shared_informer.go:318] Caches are synced for tokens
	I0328 01:33:28.256098    6044 command_runner.go:130] ! I0328 01:32:21.543460       1 controllermanager.go:735] "Started controller" controller="namespace-controller"
	I0328 01:33:28.256098    6044 command_runner.go:130] ! I0328 01:32:21.543999       1 namespace_controller.go:197] "Starting namespace controller"
	I0328 01:33:28.256098    6044 command_runner.go:130] ! I0328 01:32:21.544021       1 shared_informer.go:311] Waiting for caches to sync for namespace
	I0328 01:33:28.256098    6044 command_runner.go:130] ! I0328 01:32:21.554383       1 controllermanager.go:735] "Started controller" controller="serviceaccount-controller"
	I0328 01:33:28.256098    6044 command_runner.go:130] ! I0328 01:32:21.555541       1 serviceaccounts_controller.go:111] "Starting service account controller"
	I0328 01:33:28.256209    6044 command_runner.go:130] ! I0328 01:32:21.555562       1 shared_informer.go:311] Waiting for caches to sync for service account
	I0328 01:33:28.256209    6044 command_runner.go:130] ! I0328 01:32:21.587795       1 garbagecollector.go:155] "Starting controller" controller="garbagecollector"
	I0328 01:33:28.256209    6044 command_runner.go:130] ! I0328 01:32:21.587823       1 shared_informer.go:311] Waiting for caches to sync for garbage collector
	I0328 01:33:28.256209    6044 command_runner.go:130] ! I0328 01:32:21.587848       1 graph_builder.go:302] "Running" component="GraphBuilder"
	I0328 01:33:28.256284    6044 command_runner.go:130] ! I0328 01:32:21.592263       1 controllermanager.go:735] "Started controller" controller="garbage-collector-controller"
	I0328 01:33:28.256284    6044 command_runner.go:130] ! E0328 01:32:21.607017       1 core.go:270] "Failed to start cloud node lifecycle controller" err="no cloud provider provided"
	I0328 01:33:28.256284    6044 command_runner.go:130] ! I0328 01:32:21.607046       1 controllermanager.go:713] "Warning: skipping controller" controller="cloud-node-lifecycle-controller"
	I0328 01:33:28.256348    6044 command_runner.go:130] ! I0328 01:32:21.629420       1 controllermanager.go:735] "Started controller" controller="persistentvolume-expander-controller"
	I0328 01:33:28.256348    6044 command_runner.go:130] ! I0328 01:32:21.629868       1 expand_controller.go:328] "Starting expand controller"
	I0328 01:33:28.256348    6044 command_runner.go:130] ! I0328 01:32:21.633210       1 shared_informer.go:311] Waiting for caches to sync for expand
	I0328 01:33:28.256348    6044 command_runner.go:130] ! I0328 01:32:21.640307       1 controllermanager.go:735] "Started controller" controller="endpointslice-controller"
	I0328 01:33:28.256409    6044 command_runner.go:130] ! I0328 01:32:21.640871       1 endpointslice_controller.go:264] "Starting endpoint slice controller"
	I0328 01:33:28.256433    6044 command_runner.go:130] ! I0328 01:32:21.641527       1 shared_informer.go:311] Waiting for caches to sync for endpoint_slice
	I0328 01:33:28.256433    6044 command_runner.go:130] ! I0328 01:32:21.649017       1 controllermanager.go:735] "Started controller" controller="replicationcontroller-controller"
	I0328 01:33:28.256461    6044 command_runner.go:130] ! I0328 01:32:21.649755       1 replica_set.go:214] "Starting controller" name="replicationcontroller"
	I0328 01:33:28.256461    6044 command_runner.go:130] ! I0328 01:32:21.649783       1 shared_informer.go:311] Waiting for caches to sync for ReplicationController
	I0328 01:33:28.256461    6044 command_runner.go:130] ! I0328 01:32:21.663585       1 controllermanager.go:735] "Started controller" controller="deployment-controller"
	I0328 01:33:28.256461    6044 command_runner.go:130] ! I0328 01:32:21.666026       1 deployment_controller.go:168] "Starting controller" controller="deployment"
	I0328 01:33:28.256461    6044 command_runner.go:130] ! I0328 01:32:21.666316       1 shared_informer.go:311] Waiting for caches to sync for deployment
	I0328 01:33:28.256461    6044 command_runner.go:130] ! I0328 01:32:21.701619       1 controllermanager.go:735] "Started controller" controller="disruption-controller"
	I0328 01:33:28.256461    6044 command_runner.go:130] ! I0328 01:32:21.705210       1 disruption.go:433] "Sending events to api server."
	I0328 01:33:28.256461    6044 command_runner.go:130] ! I0328 01:32:21.705303       1 disruption.go:444] "Starting disruption controller"
	I0328 01:33:28.256461    6044 command_runner.go:130] ! I0328 01:32:21.705318       1 shared_informer.go:311] Waiting for caches to sync for disruption
	I0328 01:33:28.256461    6044 command_runner.go:130] ! I0328 01:32:21.710857       1 controllermanager.go:735] "Started controller" controller="statefulset-controller"
	I0328 01:33:28.256461    6044 command_runner.go:130] ! I0328 01:32:21.711002       1 stateful_set.go:161] "Starting stateful set controller"
	I0328 01:33:28.256461    6044 command_runner.go:130] ! I0328 01:32:21.711016       1 shared_informer.go:311] Waiting for caches to sync for stateful set
	I0328 01:33:28.256461    6044 command_runner.go:130] ! I0328 01:32:21.722757       1 certificate_controller.go:115] "Starting certificate controller" name="csrsigning-kubelet-serving"
	I0328 01:33:28.256461    6044 command_runner.go:130] ! I0328 01:32:21.723222       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-kubelet-serving
	I0328 01:33:28.256461    6044 command_runner.go:130] ! I0328 01:32:21.723310       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0328 01:33:28.256461    6044 command_runner.go:130] ! I0328 01:32:21.725677       1 certificate_controller.go:115] "Starting certificate controller" name="csrsigning-kubelet-client"
	I0328 01:33:28.256461    6044 command_runner.go:130] ! I0328 01:32:21.725696       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-kubelet-client
	I0328 01:33:28.256461    6044 command_runner.go:130] ! I0328 01:32:21.725759       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0328 01:33:28.256461    6044 command_runner.go:130] ! I0328 01:32:21.726507       1 certificate_controller.go:115] "Starting certificate controller" name="csrsigning-kube-apiserver-client"
	I0328 01:33:28.256461    6044 command_runner.go:130] ! I0328 01:32:21.726521       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client
	I0328 01:33:28.256461    6044 command_runner.go:130] ! I0328 01:32:21.726539       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0328 01:33:28.256461    6044 command_runner.go:130] ! I0328 01:32:21.751095       1 certificate_controller.go:115] "Starting certificate controller" name="csrsigning-legacy-unknown"
	I0328 01:33:28.256461    6044 command_runner.go:130] ! I0328 01:32:21.751136       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
	I0328 01:33:28.257002    6044 command_runner.go:130] ! I0328 01:32:21.751164       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0328 01:33:28.257002    6044 command_runner.go:130] ! I0328 01:32:21.751048       1 controllermanager.go:735] "Started controller" controller="certificatesigningrequest-signing-controller"
	I0328 01:33:28.257002    6044 command_runner.go:130] ! E0328 01:32:21.760877       1 core.go:105] "Failed to start service controller" err="WARNING: no cloud provider provided, services of type LoadBalancer will fail"
	I0328 01:33:28.257049    6044 command_runner.go:130] ! I0328 01:32:21.761111       1 controllermanager.go:713] "Warning: skipping controller" controller="service-lb-controller"
	I0328 01:33:28.257049    6044 command_runner.go:130] ! I0328 01:32:21.770248       1 controllermanager.go:735] "Started controller" controller="persistentvolumeclaim-protection-controller"
	I0328 01:33:28.257049    6044 command_runner.go:130] ! I0328 01:32:21.771349       1 pvc_protection_controller.go:102] "Starting PVC protection controller"
	I0328 01:33:28.257049    6044 command_runner.go:130] ! I0328 01:32:21.771929       1 shared_informer.go:311] Waiting for caches to sync for PVC protection
	I0328 01:33:28.257167    6044 command_runner.go:130] ! I0328 01:32:21.788256       1 controllermanager.go:735] "Started controller" controller="root-ca-certificate-publisher-controller"
	I0328 01:33:28.257167    6044 command_runner.go:130] ! I0328 01:32:21.788511       1 publisher.go:102] "Starting root CA cert publisher controller"
	I0328 01:33:28.257167    6044 command_runner.go:130] ! I0328 01:32:21.788524       1 shared_informer.go:311] Waiting for caches to sync for crt configmap
	I0328 01:33:28.257236    6044 command_runner.go:130] ! I0328 01:32:21.815523       1 controllermanager.go:735] "Started controller" controller="legacy-serviceaccount-token-cleaner-controller"
	I0328 01:33:28.257261    6044 command_runner.go:130] ! I0328 01:32:21.815692       1 legacy_serviceaccount_token_cleaner.go:103] "Starting legacy service account token cleaner controller"
	I0328 01:33:28.257291    6044 command_runner.go:130] ! I0328 01:32:21.816619       1 shared_informer.go:311] Waiting for caches to sync for legacy-service-account-token-cleaner
	I0328 01:33:28.257291    6044 command_runner.go:130] ! I0328 01:32:21.873573       1 controllermanager.go:735] "Started controller" controller="horizontal-pod-autoscaler-controller"
	I0328 01:33:28.257291    6044 command_runner.go:130] ! I0328 01:32:21.873852       1 controllermanager.go:687] "Controller is disabled by a feature gate" controller="validatingadmissionpolicy-status-controller" requiredFeatureGates=["ValidatingAdmissionPolicy"]
	I0328 01:33:28.257291    6044 command_runner.go:130] ! I0328 01:32:21.873869       1 controllermanager.go:687] "Controller is disabled by a feature gate" controller="service-cidr-controller" requiredFeatureGates=["MultiCIDRServiceAllocator"]
	I0328 01:33:28.257291    6044 command_runner.go:130] ! I0328 01:32:21.873702       1 horizontal.go:200] "Starting HPA controller"
	I0328 01:33:28.257291    6044 command_runner.go:130] ! I0328 01:32:21.874098       1 shared_informer.go:311] Waiting for caches to sync for HPA
	I0328 01:33:28.257291    6044 command_runner.go:130] ! I0328 01:32:21.901041       1 controllermanager.go:735] "Started controller" controller="daemonset-controller"
	I0328 01:33:28.257291    6044 command_runner.go:130] ! I0328 01:32:21.901450       1 daemon_controller.go:291] "Starting daemon sets controller"
	I0328 01:33:28.257291    6044 command_runner.go:130] ! I0328 01:32:21.901466       1 shared_informer.go:311] Waiting for caches to sync for daemon sets
	I0328 01:33:28.257291    6044 command_runner.go:130] ! I0328 01:32:21.907150       1 controllermanager.go:735] "Started controller" controller="ttl-controller"
	I0328 01:33:28.257291    6044 command_runner.go:130] ! I0328 01:32:21.907285       1 ttl_controller.go:124] "Starting TTL controller"
	I0328 01:33:28.257291    6044 command_runner.go:130] ! I0328 01:32:21.907294       1 shared_informer.go:311] Waiting for caches to sync for TTL
	I0328 01:33:28.257291    6044 command_runner.go:130] ! I0328 01:32:21.918008       1 controllermanager.go:735] "Started controller" controller="bootstrap-signer-controller"
	I0328 01:33:28.257291    6044 command_runner.go:130] ! I0328 01:32:21.918049       1 core.go:294] "Warning: configure-cloud-routes is set, but no cloud provider specified. Will not configure cloud provider routes."
	I0328 01:33:28.257291    6044 command_runner.go:130] ! I0328 01:32:21.918077       1 controllermanager.go:713] "Warning: skipping controller" controller="node-route-controller"
	I0328 01:33:28.257291    6044 command_runner.go:130] ! I0328 01:32:21.918277       1 shared_informer.go:311] Waiting for caches to sync for bootstrap_signer
	I0328 01:33:28.257291    6044 command_runner.go:130] ! I0328 01:32:21.926280       1 controllermanager.go:735] "Started controller" controller="ephemeral-volume-controller"
	I0328 01:33:28.257291    6044 command_runner.go:130] ! I0328 01:32:21.926334       1 controllermanager.go:687] "Controller is disabled by a feature gate" controller="storageversion-garbage-collector-controller" requiredFeatureGates=["APIServerIdentity","StorageVersionAPI"]
	I0328 01:33:28.257291    6044 command_runner.go:130] ! I0328 01:32:21.926586       1 controller.go:169] "Starting ephemeral volume controller"
	I0328 01:33:28.257291    6044 command_runner.go:130] ! I0328 01:32:21.926965       1 shared_informer.go:311] Waiting for caches to sync for ephemeral
	I0328 01:33:28.257291    6044 command_runner.go:130] ! I0328 01:32:22.081182       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="ingresses.networking.k8s.io"
	I0328 01:33:28.257291    6044 command_runner.go:130] ! I0328 01:32:22.083797       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="networkpolicies.networking.k8s.io"
	I0328 01:33:28.257291    6044 command_runner.go:130] ! I0328 01:32:22.084146       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="roles.rbac.authorization.k8s.io"
	I0328 01:33:28.257291    6044 command_runner.go:130] ! I0328 01:32:22.084540       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="csistoragecapacities.storage.k8s.io"
	I0328 01:33:28.257291    6044 command_runner.go:130] ! W0328 01:32:22.084798       1 shared_informer.go:591] resyncPeriod 19h39m22.96948195s is smaller than resyncCheckPeriod 22h4m29.884091788s and the informer has already started. Changing it to 22h4m29.884091788s
	I0328 01:33:28.257291    6044 command_runner.go:130] ! I0328 01:32:22.085208       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="serviceaccounts"
	I0328 01:33:28.257291    6044 command_runner.go:130] ! I0328 01:32:22.085543       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="horizontalpodautoscalers.autoscaling"
	I0328 01:33:28.257291    6044 command_runner.go:130] ! I0328 01:32:22.085825       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="statefulsets.apps"
	I0328 01:33:28.257291    6044 command_runner.go:130] ! I0328 01:32:22.086183       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="controllerrevisions.apps"
	I0328 01:33:28.257291    6044 command_runner.go:130] ! I0328 01:32:22.086894       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="podtemplates"
	I0328 01:33:28.257827    6044 command_runner.go:130] ! I0328 01:32:22.087069       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="daemonsets.apps"
	I0328 01:33:28.257827    6044 command_runner.go:130] ! I0328 01:32:22.087521       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="leases.coordination.k8s.io"
	I0328 01:33:28.257869    6044 command_runner.go:130] ! I0328 01:32:22.087567       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="endpoints"
	I0328 01:33:28.257869    6044 command_runner.go:130] ! W0328 01:32:22.087624       1 shared_informer.go:591] resyncPeriod 12h6m23.941100832s is smaller than resyncCheckPeriod 22h4m29.884091788s and the informer has already started. Changing it to 22h4m29.884091788s
	I0328 01:33:28.257869    6044 command_runner.go:130] ! I0328 01:32:22.087903       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="deployments.apps"
	I0328 01:33:28.257869    6044 command_runner.go:130] ! I0328 01:32:22.088034       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="rolebindings.rbac.authorization.k8s.io"
	I0328 01:33:28.257869    6044 command_runner.go:130] ! I0328 01:32:22.088275       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="endpointslices.discovery.k8s.io"
	I0328 01:33:28.257985    6044 command_runner.go:130] ! I0328 01:32:22.088741       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="replicasets.apps"
	I0328 01:33:28.257985    6044 command_runner.go:130] ! I0328 01:32:22.089011       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="poddisruptionbudgets.policy"
	I0328 01:33:28.258047    6044 command_runner.go:130] ! I0328 01:32:22.104096       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="cronjobs.batch"
	I0328 01:33:28.258047    6044 command_runner.go:130] ! I0328 01:32:22.124297       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="jobs.batch"
	I0328 01:33:28.258047    6044 command_runner.go:130] ! I0328 01:32:22.131348       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="limitranges"
	I0328 01:33:28.258047    6044 command_runner.go:130] ! I0328 01:32:22.132084       1 controllermanager.go:735] "Started controller" controller="resourcequota-controller"
	I0328 01:33:28.258047    6044 command_runner.go:130] ! I0328 01:32:22.132998       1 resource_quota_controller.go:294] "Starting resource quota controller"
	I0328 01:33:28.258127    6044 command_runner.go:130] ! I0328 01:32:22.133345       1 shared_informer.go:311] Waiting for caches to sync for resource quota
	I0328 01:33:28.258127    6044 command_runner.go:130] ! I0328 01:32:22.134354       1 resource_quota_monitor.go:305] "QuotaMonitor running"
	I0328 01:33:28.258127    6044 command_runner.go:130] ! I0328 01:32:22.146807       1 controllermanager.go:735] "Started controller" controller="job-controller"
	I0328 01:33:28.258199    6044 command_runner.go:130] ! I0328 01:32:22.147286       1 job_controller.go:224] "Starting job controller"
	I0328 01:33:28.258199    6044 command_runner.go:130] ! I0328 01:32:22.147508       1 shared_informer.go:311] Waiting for caches to sync for job
	I0328 01:33:28.258199    6044 command_runner.go:130] ! I0328 01:32:22.165018       1 node_lifecycle_controller.go:425] "Controller will reconcile labels"
	I0328 01:33:28.258274    6044 command_runner.go:130] ! I0328 01:32:22.165501       1 controllermanager.go:735] "Started controller" controller="node-lifecycle-controller"
	I0328 01:33:28.258274    6044 command_runner.go:130] ! I0328 01:32:22.165846       1 node_lifecycle_controller.go:459] "Sending events to api server"
	I0328 01:33:28.258274    6044 command_runner.go:130] ! I0328 01:32:22.166330       1 node_lifecycle_controller.go:470] "Starting node controller"
	I0328 01:33:28.258274    6044 command_runner.go:130] ! I0328 01:32:22.167894       1 shared_informer.go:311] Waiting for caches to sync for taint
	I0328 01:33:28.258364    6044 command_runner.go:130] ! I0328 01:32:22.212429       1 controllermanager.go:735] "Started controller" controller="clusterrole-aggregation-controller"
	I0328 01:33:28.258364    6044 command_runner.go:130] ! I0328 01:32:22.212522       1 clusterroleaggregation_controller.go:189] "Starting ClusterRoleAggregator controller"
	I0328 01:33:28.258364    6044 command_runner.go:130] ! I0328 01:32:22.212533       1 shared_informer.go:311] Waiting for caches to sync for ClusterRoleAggregator
	I0328 01:33:28.258364    6044 command_runner.go:130] ! I0328 01:32:22.258526       1 controllermanager.go:735] "Started controller" controller="persistentvolume-protection-controller"
	I0328 01:33:28.258453    6044 command_runner.go:130] ! I0328 01:32:22.258865       1 pv_protection_controller.go:78] "Starting PV protection controller"
	I0328 01:33:28.258453    6044 command_runner.go:130] ! I0328 01:32:22.258907       1 shared_informer.go:311] Waiting for caches to sync for PV protection
	I0328 01:33:28.258453    6044 command_runner.go:130] ! I0328 01:32:22.324062       1 controllermanager.go:735] "Started controller" controller="ttl-after-finished-controller"
	I0328 01:33:28.258518    6044 command_runner.go:130] ! I0328 01:32:22.324128       1 ttlafterfinished_controller.go:109] "Starting TTL after finished controller"
	I0328 01:33:28.258545    6044 command_runner.go:130] ! I0328 01:32:22.324137       1 shared_informer.go:311] Waiting for caches to sync for TTL after finished
	I0328 01:33:28.258575    6044 command_runner.go:130] ! I0328 01:32:22.358296       1 controllermanager.go:735] "Started controller" controller="certificatesigningrequest-approving-controller"
	I0328 01:33:28.258575    6044 command_runner.go:130] ! I0328 01:32:22.358367       1 certificate_controller.go:115] "Starting certificate controller" name="csrapproving"
	I0328 01:33:28.258575    6044 command_runner.go:130] ! I0328 01:32:22.358377       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrapproving
	I0328 01:33:28.258575    6044 command_runner.go:130] ! I0328 01:32:32.447083       1 range_allocator.go:111] "No Secondary Service CIDR provided. Skipping filtering out secondary service addresses"
	I0328 01:33:28.258575    6044 command_runner.go:130] ! I0328 01:32:32.447529       1 node_ipam_controller.go:160] "Starting ipam controller"
	I0328 01:33:28.258575    6044 command_runner.go:130] ! I0328 01:32:32.447619       1 shared_informer.go:311] Waiting for caches to sync for node
	I0328 01:33:28.258575    6044 command_runner.go:130] ! I0328 01:32:32.447221       1 controllermanager.go:735] "Started controller" controller="node-ipam-controller"
	I0328 01:33:28.258575    6044 command_runner.go:130] ! I0328 01:32:32.451626       1 controllermanager.go:735] "Started controller" controller="persistentvolume-attach-detach-controller"
	I0328 01:33:28.258575    6044 command_runner.go:130] ! I0328 01:32:32.451960       1 controllermanager.go:687] "Controller is disabled by a feature gate" controller="resourceclaim-controller" requiredFeatureGates=["DynamicResourceAllocation"]
	I0328 01:33:28.258575    6044 command_runner.go:130] ! I0328 01:32:32.451695       1 attach_detach_controller.go:337] "Starting attach detach controller"
	I0328 01:33:28.258575    6044 command_runner.go:130] ! I0328 01:32:32.452296       1 shared_informer.go:311] Waiting for caches to sync for attach detach
	I0328 01:33:28.258575    6044 command_runner.go:130] ! I0328 01:32:32.465613       1 shared_informer.go:311] Waiting for caches to sync for resource quota
	I0328 01:33:28.258575    6044 command_runner.go:130] ! I0328 01:32:32.470233       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-240000-m02"
	I0328 01:33:28.258575    6044 command_runner.go:130] ! I0328 01:32:32.470509       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-240000-m02"
	I0328 01:33:28.258575    6044 command_runner.go:130] ! I0328 01:32:32.470641       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-240000-m02"
	I0328 01:33:28.258575    6044 command_runner.go:130] ! I0328 01:32:32.471011       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-240000\" does not exist"
	I0328 01:33:28.258575    6044 command_runner.go:130] ! I0328 01:32:32.471142       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-240000-m02\" does not exist"
	I0328 01:33:28.258575    6044 command_runner.go:130] ! I0328 01:32:32.471391       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-240000-m03\" does not exist"
	I0328 01:33:28.258575    6044 command_runner.go:130] ! I0328 01:32:32.496560       1 shared_informer.go:311] Waiting for caches to sync for garbage collector
	I0328 01:33:28.258575    6044 command_runner.go:130] ! I0328 01:32:32.507769       1 shared_informer.go:318] Caches are synced for TTL
	I0328 01:33:28.258575    6044 command_runner.go:130] ! I0328 01:32:32.513624       1 shared_informer.go:318] Caches are synced for ClusterRoleAggregator
	I0328 01:33:28.258575    6044 command_runner.go:130] ! I0328 01:32:32.518304       1 shared_informer.go:318] Caches are synced for bootstrap_signer
	I0328 01:33:28.258575    6044 command_runner.go:130] ! I0328 01:32:32.519904       1 shared_informer.go:318] Caches are synced for cronjob
	I0328 01:33:28.258575    6044 command_runner.go:130] ! I0328 01:32:32.524287       1 shared_informer.go:318] Caches are synced for TTL after finished
	I0328 01:33:28.258575    6044 command_runner.go:130] ! I0328 01:32:32.529587       1 shared_informer.go:318] Caches are synced for ReplicaSet
	I0328 01:33:28.258575    6044 command_runner.go:130] ! I0328 01:32:32.531767       1 shared_informer.go:318] Caches are synced for ephemeral
	I0328 01:33:28.258575    6044 command_runner.go:130] ! I0328 01:32:32.533493       1 shared_informer.go:318] Caches are synced for expand
	I0328 01:33:28.258575    6044 command_runner.go:130] ! I0328 01:32:32.549795       1 shared_informer.go:318] Caches are synced for job
	I0328 01:33:28.258575    6044 command_runner.go:130] ! I0328 01:32:32.550526       1 shared_informer.go:318] Caches are synced for namespace
	I0328 01:33:28.258575    6044 command_runner.go:130] ! I0328 01:32:32.550874       1 shared_informer.go:318] Caches are synced for ReplicationController
	I0328 01:33:28.258575    6044 command_runner.go:130] ! I0328 01:32:32.551065       1 shared_informer.go:318] Caches are synced for node
	I0328 01:33:28.258575    6044 command_runner.go:130] ! I0328 01:32:32.551152       1 range_allocator.go:174] "Sending events to api server"
	I0328 01:33:28.258575    6044 command_runner.go:130] ! I0328 01:32:32.551255       1 range_allocator.go:178] "Starting range CIDR allocator"
	I0328 01:33:28.258575    6044 command_runner.go:130] ! I0328 01:32:32.551308       1 shared_informer.go:311] Waiting for caches to sync for cidrallocator
	I0328 01:33:28.258575    6044 command_runner.go:130] ! I0328 01:32:32.551340       1 shared_informer.go:318] Caches are synced for cidrallocator
	I0328 01:33:28.259102    6044 command_runner.go:130] ! I0328 01:32:32.554992       1 shared_informer.go:318] Caches are synced for attach detach
	I0328 01:33:28.259102    6044 command_runner.go:130] ! I0328 01:32:32.555603       1 shared_informer.go:318] Caches are synced for service account
	I0328 01:33:28.259102    6044 command_runner.go:130] ! I0328 01:32:32.555933       1 shared_informer.go:318] Caches are synced for persistent volume
	I0328 01:33:28.259158    6044 command_runner.go:130] ! I0328 01:32:32.568824       1 shared_informer.go:318] Caches are synced for taint
	I0328 01:33:28.259158    6044 command_runner.go:130] ! I0328 01:32:32.568944       1 node_lifecycle_controller.go:1222] "Initializing eviction metric for zone" zone=""
	I0328 01:33:28.259158    6044 command_runner.go:130] ! I0328 01:32:32.568985       1 shared_informer.go:318] Caches are synced for taint-eviction-controller
	I0328 01:33:28.259158    6044 command_runner.go:130] ! I0328 01:32:32.569031       1 shared_informer.go:318] Caches are synced for deployment
	I0328 01:33:28.259231    6044 command_runner.go:130] ! I0328 01:32:32.573248       1 event.go:376] "Event occurred" object="multinode-240000" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-240000 event: Registered Node multinode-240000 in Controller"
	I0328 01:33:28.259258    6044 command_runner.go:130] ! I0328 01:32:32.573552       1 event.go:376] "Event occurred" object="multinode-240000-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-240000-m02 event: Registered Node multinode-240000-m02 in Controller"
	I0328 01:33:28.259258    6044 command_runner.go:130] ! I0328 01:32:32.573778       1 event.go:376] "Event occurred" object="multinode-240000-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-240000-m03 event: Registered Node multinode-240000-m03 in Controller"
	I0328 01:33:28.259258    6044 command_runner.go:130] ! I0328 01:32:32.573567       1 shared_informer.go:318] Caches are synced for PV protection
	I0328 01:33:28.259339    6044 command_runner.go:130] ! I0328 01:32:32.573253       1 shared_informer.go:318] Caches are synced for PVC protection
	I0328 01:33:28.259339    6044 command_runner.go:130] ! I0328 01:32:32.575355       1 shared_informer.go:318] Caches are synced for HPA
	I0328 01:33:28.259339    6044 command_runner.go:130] ! I0328 01:32:32.588982       1 shared_informer.go:318] Caches are synced for crt configmap
	I0328 01:33:28.259339    6044 command_runner.go:130] ! I0328 01:32:32.602942       1 shared_informer.go:318] Caches are synced for daemon sets
	I0328 01:33:28.259402    6044 command_runner.go:130] ! I0328 01:32:32.605960       1 shared_informer.go:318] Caches are synced for disruption
	I0328 01:33:28.259426    6044 command_runner.go:130] ! I0328 01:32:32.607311       1 node_lifecycle_controller.go:874] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-240000"
	I0328 01:33:28.259456    6044 command_runner.go:130] ! I0328 01:32:32.607638       1 node_lifecycle_controller.go:874] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-240000-m02"
	I0328 01:33:28.259456    6044 command_runner.go:130] ! I0328 01:32:32.608098       1 node_lifecycle_controller.go:874] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-240000-m03"
	I0328 01:33:28.259456    6044 command_runner.go:130] ! I0328 01:32:32.608944       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="76.132556ms"
	I0328 01:33:28.259456    6044 command_runner.go:130] ! I0328 01:32:32.609570       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="79.623412ms"
	I0328 01:33:28.259456    6044 command_runner.go:130] ! I0328 01:32:32.610117       1 node_lifecycle_controller.go:1068] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I0328 01:33:28.259456    6044 command_runner.go:130] ! I0328 01:32:32.611937       1 shared_informer.go:318] Caches are synced for stateful set
	I0328 01:33:28.259456    6044 command_runner.go:130] ! I0328 01:32:32.612346       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="59.398µs"
	I0328 01:33:28.259456    6044 command_runner.go:130] ! I0328 01:32:32.612652       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="32.799µs"
	I0328 01:33:28.259456    6044 command_runner.go:130] ! I0328 01:32:32.618783       1 shared_informer.go:318] Caches are synced for legacy-service-account-token-cleaner
	I0328 01:33:28.259456    6044 command_runner.go:130] ! I0328 01:32:32.623971       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-serving
	I0328 01:33:28.259456    6044 command_runner.go:130] ! I0328 01:32:32.624286       1 shared_informer.go:318] Caches are synced for GC
	I0328 01:33:28.259456    6044 command_runner.go:130] ! I0328 01:32:32.626634       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0328 01:33:28.259456    6044 command_runner.go:130] ! I0328 01:32:32.626831       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-client
	I0328 01:33:28.259456    6044 command_runner.go:130] ! I0328 01:32:32.651676       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-legacy-unknown
	I0328 01:33:28.259456    6044 command_runner.go:130] ! I0328 01:32:32.659290       1 shared_informer.go:318] Caches are synced for certificate-csrapproving
	I0328 01:33:28.259456    6044 command_runner.go:130] ! I0328 01:32:32.667521       1 shared_informer.go:318] Caches are synced for resource quota
	I0328 01:33:28.259456    6044 command_runner.go:130] ! I0328 01:32:32.683826       1 shared_informer.go:318] Caches are synced for endpoint_slice_mirroring
	I0328 01:33:28.259456    6044 command_runner.go:130] ! I0328 01:32:32.683944       1 shared_informer.go:318] Caches are synced for endpoint
	I0328 01:33:28.259456    6044 command_runner.go:130] ! I0328 01:32:32.737259       1 shared_informer.go:318] Caches are synced for resource quota
	I0328 01:33:28.259456    6044 command_runner.go:130] ! I0328 01:32:32.742870       1 shared_informer.go:318] Caches are synced for endpoint_slice
	I0328 01:33:28.259456    6044 command_runner.go:130] ! I0328 01:32:33.088175       1 shared_informer.go:318] Caches are synced for garbage collector
	I0328 01:33:28.259456    6044 command_runner.go:130] ! I0328 01:32:33.088209       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I0328 01:33:28.259456    6044 command_runner.go:130] ! I0328 01:32:33.097231       1 shared_informer.go:318] Caches are synced for garbage collector
	I0328 01:33:28.259456    6044 command_runner.go:130] ! I0328 01:32:53.970448       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-240000-m02"
	I0328 01:33:28.259456    6044 command_runner.go:130] ! I0328 01:32:57.647643       1 event.go:376] "Event occurred" object="kube-system/storage-provisioner" fieldPath="" kind="Pod" apiVersion="v1" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod kube-system/storage-provisioner"
	I0328 01:33:28.259456    6044 command_runner.go:130] ! I0328 01:32:57.647943       1 event.go:376] "Event occurred" object="default/busybox-7fdf7869d9-ct428" fieldPath="" kind="Pod" apiVersion="v1" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-7fdf7869d9-ct428"
	I0328 01:33:28.259456    6044 command_runner.go:130] ! I0328 01:32:57.648069       1 event.go:376] "Event occurred" object="kube-system/coredns-76f75df574-776ph" fieldPath="" kind="Pod" apiVersion="v1" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod kube-system/coredns-76f75df574-776ph"
	I0328 01:33:28.259456    6044 command_runner.go:130] ! I0328 01:33:12.667954       1 event.go:376] "Event occurred" object="multinode-240000-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node multinode-240000-m02 status is now: NodeNotReady"
	I0328 01:33:28.259456    6044 command_runner.go:130] ! I0328 01:33:12.686681       1 event.go:376] "Event occurred" object="default/busybox-7fdf7869d9-zgwm4" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0328 01:33:28.259456    6044 command_runner.go:130] ! I0328 01:33:12.698519       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="13.246789ms"
	I0328 01:33:28.259456    6044 command_runner.go:130] ! I0328 01:33:12.699114       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="37.9µs"
	I0328 01:33:28.259456    6044 command_runner.go:130] ! I0328 01:33:12.709080       1 event.go:376] "Event occurred" object="kube-system/kindnet-hsnfl" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0328 01:33:28.259456    6044 command_runner.go:130] ! I0328 01:33:12.733251       1 event.go:376] "Event occurred" object="kube-system/kube-proxy-t88gz" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0328 01:33:28.259981    6044 command_runner.go:130] ! I0328 01:33:25.571898       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="20.940169ms"
	I0328 01:33:28.259981    6044 command_runner.go:130] ! I0328 01:33:25.572013       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="31.4µs"
	I0328 01:33:28.260028    6044 command_runner.go:130] ! I0328 01:33:25.596419       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="70.5µs"
	I0328 01:33:28.260028    6044 command_runner.go:130] ! I0328 01:33:25.652921       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="18.37866ms"
	I0328 01:33:28.260083    6044 command_runner.go:130] ! I0328 01:33:25.653855       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="42.9µs"
	I0328 01:33:28.277311    6044 logs.go:123] Gathering logs for kubelet ...
	I0328 01:33:28.277311    6044 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:33:28.311663    6044 command_runner.go:130] > Mar 28 01:32:09 multinode-240000 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0328 01:33:28.311755    6044 command_runner.go:130] > Mar 28 01:32:10 multinode-240000 kubelet[1398]: I0328 01:32:10.127138    1398 server.go:487] "Kubelet version" kubeletVersion="v1.29.3"
	I0328 01:33:28.311755    6044 command_runner.go:130] > Mar 28 01:32:10 multinode-240000 kubelet[1398]: I0328 01:32:10.127495    1398 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0328 01:33:28.311755    6044 command_runner.go:130] > Mar 28 01:32:10 multinode-240000 kubelet[1398]: I0328 01:32:10.127845    1398 server.go:919] "Client rotation is on, will bootstrap in background"
	I0328 01:33:28.311755    6044 command_runner.go:130] > Mar 28 01:32:10 multinode-240000 kubelet[1398]: E0328 01:32:10.128279    1398 run.go:74] "command failed" err="failed to run Kubelet: unable to load bootstrap kubeconfig: stat /etc/kubernetes/bootstrap-kubelet.conf: no such file or directory"
	I0328 01:33:28.311755    6044 command_runner.go:130] > Mar 28 01:32:10 multinode-240000 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	I0328 01:33:28.311856    6044 command_runner.go:130] > Mar 28 01:32:10 multinode-240000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	I0328 01:33:28.311884    6044 command_runner.go:130] > Mar 28 01:32:10 multinode-240000 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
	I0328 01:33:28.311884    6044 command_runner.go:130] > Mar 28 01:32:10 multinode-240000 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	I0328 01:33:28.311884    6044 command_runner.go:130] > Mar 28 01:32:10 multinode-240000 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0328 01:33:28.311884    6044 command_runner.go:130] > Mar 28 01:32:10 multinode-240000 kubelet[1450]: I0328 01:32:10.911342    1450 server.go:487] "Kubelet version" kubeletVersion="v1.29.3"
	I0328 01:33:28.311956    6044 command_runner.go:130] > Mar 28 01:32:10 multinode-240000 kubelet[1450]: I0328 01:32:10.911442    1450 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0328 01:33:28.311980    6044 command_runner.go:130] > Mar 28 01:32:10 multinode-240000 kubelet[1450]: I0328 01:32:10.911822    1450 server.go:919] "Client rotation is on, will bootstrap in background"
	I0328 01:33:28.311980    6044 command_runner.go:130] > Mar 28 01:32:10 multinode-240000 kubelet[1450]: E0328 01:32:10.911883    1450 run.go:74] "command failed" err="failed to run Kubelet: unable to load bootstrap kubeconfig: stat /etc/kubernetes/bootstrap-kubelet.conf: no such file or directory"
	I0328 01:33:28.311980    6044 command_runner.go:130] > Mar 28 01:32:10 multinode-240000 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	I0328 01:33:28.312052    6044 command_runner.go:130] > Mar 28 01:32:10 multinode-240000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	I0328 01:33:28.312078    6044 command_runner.go:130] > Mar 28 01:32:11 multinode-240000 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	I0328 01:33:28.312078    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0328 01:33:28.312078    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.568166    1533 server.go:487] "Kubelet version" kubeletVersion="v1.29.3"
	I0328 01:33:28.312078    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.568590    1533 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0328 01:33:28.312144    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.568985    1533 server.go:919] "Client rotation is on, will bootstrap in background"
	I0328 01:33:28.312170    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.572343    1533 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	I0328 01:33:28.312221    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.590932    1533 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0328 01:33:28.312257    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.648763    1533 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /"
	I0328 01:33:28.312294    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.650098    1533 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
	I0328 01:33:28.312389    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.650393    1533 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
	I0328 01:33:28.312412    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.650479    1533 topology_manager.go:138] "Creating topology manager with none policy"
	I0328 01:33:28.312412    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.650495    1533 container_manager_linux.go:301] "Creating device plugin manager"
	I0328 01:33:28.312467    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.652420    1533 state_mem.go:36] "Initialized new in-memory state store"
	I0328 01:33:28.312493    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.654064    1533 kubelet.go:396] "Attempting to sync node with API server"
	I0328 01:33:28.312493    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.654388    1533 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
	I0328 01:33:28.312493    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.654468    1533 kubelet.go:312] "Adding apiserver pod source"
	I0328 01:33:28.312493    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.655057    1533 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
	I0328 01:33:28.312493    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: W0328 01:32:13.659987    1533 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)multinode-240000&limit=500&resourceVersion=0": dial tcp 172.28.229.19:8443: connect: connection refused
	I0328 01:33:28.312493    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: E0328 01:32:13.660087    1533 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)multinode-240000&limit=500&resourceVersion=0": dial tcp 172.28.229.19:8443: connect: connection refused
	I0328 01:33:28.312493    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: W0328 01:32:13.669074    1533 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.28.229.19:8443: connect: connection refused
	I0328 01:33:28.312493    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: E0328 01:32:13.669300    1533 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.28.229.19:8443: connect: connection refused
	I0328 01:33:28.312493    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.674896    1533 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="docker" version="26.0.0" apiVersion="v1"
	I0328 01:33:28.312493    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.676909    1533 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
	I0328 01:33:28.312493    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: W0328 01:32:13.677427    1533 probe.go:268] Flexvolume plugin directory at /usr/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
	I0328 01:33:28.312493    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.678180    1533 server.go:1256] "Started kubelet"
	I0328 01:33:28.312493    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.680600    1533 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
	I0328 01:33:28.312493    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.682066    1533 server.go:461] "Adding debug handlers to kubelet server"
	I0328 01:33:28.312493    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.683585    1533 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
	I0328 01:33:28.312493    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.684672    1533 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
	I0328 01:33:28.312493    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: E0328 01:32:13.686372    1533 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events\": dial tcp 172.28.229.19:8443: connect: connection refused" event="&Event{ObjectMeta:{multinode-240000.17c0c99ccc29b81f  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:multinode-240000,UID:multinode-240000,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:multinode-240000,},FirstTimestamp:2024-03-28 01:32:13.678155807 +0000 UTC m=+0.237165597,LastTimestamp:2024-03-28 01:32:13.678155807 +0000 UTC m=+0.237165597,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:multinode-240000,}"
	I0328 01:33:28.312493    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.690229    1533 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
	I0328 01:33:28.312493    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.708889    1533 volume_manager.go:291] "Starting Kubelet Volume Manager"
	I0328 01:33:28.312493    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.712930    1533 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
	I0328 01:33:28.312493    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: E0328 01:32:13.730166    1533 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-240000?timeout=10s\": dial tcp 172.28.229.19:8443: connect: connection refused" interval="200ms"
	I0328 01:33:28.313058    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: W0328 01:32:13.730938    1533 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.28.229.19:8443: connect: connection refused
	I0328 01:33:28.313104    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: E0328 01:32:13.731114    1533 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.28.229.19:8443: connect: connection refused
	I0328 01:33:28.313104    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.739149    1533 reconciler_new.go:29] "Reconciler: start to sync state"
	I0328 01:33:28.313162    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.749138    1533 factory.go:221] Registration of the systemd container factory successfully
	I0328 01:33:28.313162    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.749449    1533 factory.go:219] Registration of the crio container factory failed: Get "http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)crio%!F(MISSING)crio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
	I0328 01:33:28.313239    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.750189    1533 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
	I0328 01:33:28.313239    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.776861    1533 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
	I0328 01:33:28.313239    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.786285    1533 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
	I0328 01:33:28.313239    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.788142    1533 status_manager.go:217] "Starting to sync pod status with apiserver"
	I0328 01:33:28.313302    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.788369    1533 kubelet.go:2329] "Starting kubelet main sync loop"
	I0328 01:33:28.313355    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: E0328 01:32:13.788778    1533 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
	I0328 01:33:28.313355    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: W0328 01:32:13.796114    1533 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.28.229.19:8443: connect: connection refused
	I0328 01:33:28.313355    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: E0328 01:32:13.796211    1533 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.28.229.19:8443: connect: connection refused
	I0328 01:33:28.313355    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.819127    1533 cpu_manager.go:214] "Starting CPU manager" policy="none"
	I0328 01:33:28.313355    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.819290    1533 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
	I0328 01:33:28.313355    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.819423    1533 state_mem.go:36] "Initialized new in-memory state store"
	I0328 01:33:28.313355    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: E0328 01:32:13.820373    1533 iptables.go:575] "Could not set up iptables canary" err=<
	I0328 01:33:28.313355    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	I0328 01:33:28.313355    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	I0328 01:33:28.313355    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]:         Perhaps ip6tables or your kernel needs to be upgraded.
	I0328 01:33:28.313355    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	I0328 01:33:28.313355    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.823600    1533 state_mem.go:88] "Updated default CPUSet" cpuSet=""
	I0328 01:33:28.313355    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.823686    1533 state_mem.go:96] "Updated CPUSet assignments" assignments={}
	I0328 01:33:28.313355    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.823700    1533 policy_none.go:49] "None policy: Start"
	I0328 01:33:28.313355    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.830073    1533 kubelet_node_status.go:73] "Attempting to register node" node="multinode-240000"
	I0328 01:33:28.313355    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: E0328 01:32:13.831657    1533 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.28.229.19:8443: connect: connection refused" node="multinode-240000"
	I0328 01:33:28.313355    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.843841    1533 memory_manager.go:170] "Starting memorymanager" policy="None"
	I0328 01:33:28.313355    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.843966    1533 state_mem.go:35] "Initializing new in-memory state store"
	I0328 01:33:28.313355    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.844749    1533 state_mem.go:75] "Updated machine memory state"
	I0328 01:33:28.313355    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.847245    1533 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
	I0328 01:33:28.313355    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.848649    1533 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
	I0328 01:33:28.313355    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.890150    1533 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="930fbfde452c0b2b3f13a6751fc648a70e87137f38175cb6dd161b40193b9a79"
	I0328 01:33:28.313355    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.890206    1533 topology_manager.go:215] "Topology Admit Handler" podUID="ada1864a97137760b3789cc738948aa2" podNamespace="kube-system" podName="kube-apiserver-multinode-240000"
	I0328 01:33:28.313355    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.908127    1533 topology_manager.go:215] "Topology Admit Handler" podUID="092744cdc60a216294790b52c372bdaa" podNamespace="kube-system" podName="kube-controller-manager-multinode-240000"
	I0328 01:33:28.313957    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: E0328 01:32:13.916258    1533 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"multinode-240000\" not found"
	I0328 01:33:28.313957    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.922354    1533 topology_manager.go:215] "Topology Admit Handler" podUID="f5f9b00a2a0d8b16290abf555def0fb3" podNamespace="kube-system" podName="kube-scheduler-multinode-240000"
	I0328 01:33:28.313957    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: E0328 01:32:13.932448    1533 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-240000?timeout=10s\": dial tcp 172.28.229.19:8443: connect: connection refused" interval="400ms"
	I0328 01:33:28.313957    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.941331    1533 topology_manager.go:215] "Topology Admit Handler" podUID="9f48c65a58defdbb87996760bf93b230" podNamespace="kube-system" podName="etcd-multinode-240000"
	I0328 01:33:28.313957    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.953609    1533 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6b6f67390b0701700963eec28e4c4cc4aa0e852e4ec0f2392f0f6f5d9bdad52a"
	I0328 01:33:28.313957    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.953654    1533 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="763932cfdf0b0ce7a2df0bd78fe540ad8e5811cd74af29eee46932fb651a4df3"
	I0328 01:33:28.313957    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.953669    1533 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6ae82cd0a848978d4fcc6941c33dd7fd18404e11e40d6b5d9f46484a6af7ec7d"
	I0328 01:33:28.313957    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.966780    1533 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/092744cdc60a216294790b52c372bdaa-usr-share-ca-certificates\") pod \"kube-controller-manager-multinode-240000\" (UID: \"092744cdc60a216294790b52c372bdaa\") " pod="kube-system/kube-controller-manager-multinode-240000"
	I0328 01:33:28.313957    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.966955    1533 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ada1864a97137760b3789cc738948aa2-ca-certs\") pod \"kube-apiserver-multinode-240000\" (UID: \"ada1864a97137760b3789cc738948aa2\") " pod="kube-system/kube-apiserver-multinode-240000"
	I0328 01:33:28.313957    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.967022    1533 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ada1864a97137760b3789cc738948aa2-k8s-certs\") pod \"kube-apiserver-multinode-240000\" (UID: \"ada1864a97137760b3789cc738948aa2\") " pod="kube-system/kube-apiserver-multinode-240000"
	I0328 01:33:28.313957    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.967064    1533 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ada1864a97137760b3789cc738948aa2-usr-share-ca-certificates\") pod \"kube-apiserver-multinode-240000\" (UID: \"ada1864a97137760b3789cc738948aa2\") " pod="kube-system/kube-apiserver-multinode-240000"
	I0328 01:33:28.313957    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.967128    1533 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/092744cdc60a216294790b52c372bdaa-ca-certs\") pod \"kube-controller-manager-multinode-240000\" (UID: \"092744cdc60a216294790b52c372bdaa\") " pod="kube-system/kube-controller-manager-multinode-240000"
	I0328 01:33:28.313957    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.967158    1533 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/092744cdc60a216294790b52c372bdaa-flexvolume-dir\") pod \"kube-controller-manager-multinode-240000\" (UID: \"092744cdc60a216294790b52c372bdaa\") " pod="kube-system/kube-controller-manager-multinode-240000"
	I0328 01:33:28.313957    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.967238    1533 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/092744cdc60a216294790b52c372bdaa-k8s-certs\") pod \"kube-controller-manager-multinode-240000\" (UID: \"092744cdc60a216294790b52c372bdaa\") " pod="kube-system/kube-controller-manager-multinode-240000"
	I0328 01:33:28.314483    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.967310    1533 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/092744cdc60a216294790b52c372bdaa-kubeconfig\") pod \"kube-controller-manager-multinode-240000\" (UID: \"092744cdc60a216294790b52c372bdaa\") " pod="kube-system/kube-controller-manager-multinode-240000"
	I0328 01:33:28.314483    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.969606    1533 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="28426f4e9df5e7247fb25f1d5d48b9917e6d95d1f58292026ed0fde424835379"
	I0328 01:33:28.314483    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.985622    1533 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5d9ed3a20e88558fec102c7c331c667347b65f4c3d7d91740e135d71d8c45e6d"
	I0328 01:33:28.314483    6044 command_runner.go:130] > Mar 28 01:32:14 multinode-240000 kubelet[1533]: I0328 01:32:14.000616    1533 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7415d077c6f8104e5bc256b9c398a1cd3b34b68ae6ab02765cf3a8a5090c4b88"
	I0328 01:33:28.314483    6044 command_runner.go:130] > Mar 28 01:32:14 multinode-240000 kubelet[1533]: I0328 01:32:14.015792    1533 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ec77663c174f9dcbe665439298f2fb709a33fb88f7ac97c33834b5a202fe4540"
	I0328 01:33:28.314629    6044 command_runner.go:130] > Mar 28 01:32:14 multinode-240000 kubelet[1533]: I0328 01:32:14.042348    1533 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="20ff2ecb3a6dbfc2d1215de07989433af9d7d836214ecb1ab63afc9e48ef03ce"
	I0328 01:33:28.314629    6044 command_runner.go:130] > Mar 28 01:32:14 multinode-240000 kubelet[1533]: I0328 01:32:14.048339    1533 kubelet_node_status.go:73] "Attempting to register node" node="multinode-240000"
	I0328 01:33:28.314693    6044 command_runner.go:130] > Mar 28 01:32:14 multinode-240000 kubelet[1533]: E0328 01:32:14.049760    1533 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.28.229.19:8443: connect: connection refused" node="multinode-240000"
	I0328 01:33:28.314757    6044 command_runner.go:130] > Mar 28 01:32:14 multinode-240000 kubelet[1533]: I0328 01:32:14.068959    1533 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f5f9b00a2a0d8b16290abf555def0fb3-kubeconfig\") pod \"kube-scheduler-multinode-240000\" (UID: \"f5f9b00a2a0d8b16290abf555def0fb3\") " pod="kube-system/kube-scheduler-multinode-240000"
	I0328 01:33:28.314757    6044 command_runner.go:130] > Mar 28 01:32:14 multinode-240000 kubelet[1533]: I0328 01:32:14.069009    1533 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/9f48c65a58defdbb87996760bf93b230-etcd-certs\") pod \"etcd-multinode-240000\" (UID: \"9f48c65a58defdbb87996760bf93b230\") " pod="kube-system/etcd-multinode-240000"
	I0328 01:33:28.314757    6044 command_runner.go:130] > Mar 28 01:32:14 multinode-240000 kubelet[1533]: I0328 01:32:14.069204    1533 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/9f48c65a58defdbb87996760bf93b230-etcd-data\") pod \"etcd-multinode-240000\" (UID: \"9f48c65a58defdbb87996760bf93b230\") " pod="kube-system/etcd-multinode-240000"
	I0328 01:33:28.314757    6044 command_runner.go:130] > Mar 28 01:32:14 multinode-240000 kubelet[1533]: E0328 01:32:14.335282    1533 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-240000?timeout=10s\": dial tcp 172.28.229.19:8443: connect: connection refused" interval="800ms"
	I0328 01:33:28.314757    6044 command_runner.go:130] > Mar 28 01:32:14 multinode-240000 kubelet[1533]: I0328 01:32:14.463052    1533 kubelet_node_status.go:73] "Attempting to register node" node="multinode-240000"
	I0328 01:33:28.314757    6044 command_runner.go:130] > Mar 28 01:32:14 multinode-240000 kubelet[1533]: E0328 01:32:14.464639    1533 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.28.229.19:8443: connect: connection refused" node="multinode-240000"
	I0328 01:33:28.314757    6044 command_runner.go:130] > Mar 28 01:32:14 multinode-240000 kubelet[1533]: W0328 01:32:14.765820    1533 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.28.229.19:8443: connect: connection refused
	I0328 01:33:28.314757    6044 command_runner.go:130] > Mar 28 01:32:14 multinode-240000 kubelet[1533]: E0328 01:32:14.765926    1533 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.28.229.19:8443: connect: connection refused
	I0328 01:33:28.314757    6044 command_runner.go:130] > Mar 28 01:32:14 multinode-240000 kubelet[1533]: W0328 01:32:14.983409    1533 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.28.229.19:8443: connect: connection refused
	I0328 01:33:28.314757    6044 command_runner.go:130] > Mar 28 01:32:14 multinode-240000 kubelet[1533]: E0328 01:32:14.983490    1533 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.28.229.19:8443: connect: connection refused
	I0328 01:33:28.314757    6044 command_runner.go:130] > Mar 28 01:32:15 multinode-240000 kubelet[1533]: I0328 01:32:15.093921    1533 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4dd7c4652074475872599900ce854e48425a373dfa665073bd9bfb56fa5330c0"
	I0328 01:33:28.314757    6044 command_runner.go:130] > Mar 28 01:32:15 multinode-240000 kubelet[1533]: I0328 01:32:15.109197    1533 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8780a18ab975521e6b1b20e4b7cffe786927f03654dd858b9d179f1d73d13d81"
	I0328 01:33:28.314757    6044 command_runner.go:130] > Mar 28 01:32:15 multinode-240000 kubelet[1533]: E0328 01:32:15.138489    1533 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-240000?timeout=10s\": dial tcp 172.28.229.19:8443: connect: connection refused" interval="1.6s"
	I0328 01:33:28.314757    6044 command_runner.go:130] > Mar 28 01:32:15 multinode-240000 kubelet[1533]: W0328 01:32:15.162611    1533 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)multinode-240000&limit=500&resourceVersion=0": dial tcp 172.28.229.19:8443: connect: connection refused
	I0328 01:33:28.314757    6044 command_runner.go:130] > Mar 28 01:32:15 multinode-240000 kubelet[1533]: E0328 01:32:15.162839    1533 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)multinode-240000&limit=500&resourceVersion=0": dial tcp 172.28.229.19:8443: connect: connection refused
	I0328 01:33:28.314757    6044 command_runner.go:130] > Mar 28 01:32:15 multinode-240000 kubelet[1533]: W0328 01:32:15.243486    1533 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.28.229.19:8443: connect: connection refused
	I0328 01:33:28.314757    6044 command_runner.go:130] > Mar 28 01:32:15 multinode-240000 kubelet[1533]: E0328 01:32:15.243618    1533 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.28.229.19:8443: connect: connection refused
	I0328 01:33:28.314757    6044 command_runner.go:130] > Mar 28 01:32:15 multinode-240000 kubelet[1533]: I0328 01:32:15.300156    1533 kubelet_node_status.go:73] "Attempting to register node" node="multinode-240000"
	I0328 01:33:28.314757    6044 command_runner.go:130] > Mar 28 01:32:15 multinode-240000 kubelet[1533]: E0328 01:32:15.300985    1533 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.28.229.19:8443: connect: connection refused" node="multinode-240000"
	I0328 01:33:28.314757    6044 command_runner.go:130] > Mar 28 01:32:16 multinode-240000 kubelet[1533]: I0328 01:32:16.919859    1533 kubelet_node_status.go:73] "Attempting to register node" node="multinode-240000"
	I0328 01:33:28.315354    6044 command_runner.go:130] > Mar 28 01:32:19 multinode-240000 kubelet[1533]: I0328 01:32:19.585350    1533 kubelet_node_status.go:112] "Node was previously registered" node="multinode-240000"
	I0328 01:33:28.315354    6044 command_runner.go:130] > Mar 28 01:32:19 multinode-240000 kubelet[1533]: I0328 01:32:19.586142    1533 kubelet_node_status.go:76] "Successfully registered node" node="multinode-240000"
	I0328 01:33:28.315354    6044 command_runner.go:130] > Mar 28 01:32:19 multinode-240000 kubelet[1533]: I0328 01:32:19.588202    1533 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	I0328 01:33:28.315354    6044 command_runner.go:130] > Mar 28 01:32:19 multinode-240000 kubelet[1533]: I0328 01:32:19.589607    1533 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	I0328 01:33:28.315354    6044 command_runner.go:130] > Mar 28 01:32:19 multinode-240000 kubelet[1533]: I0328 01:32:19.606942    1533 setters.go:568] "Node became not ready" node="multinode-240000" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-03-28T01:32:19Z","lastTransitionTime":"2024-03-28T01:32:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"}
	I0328 01:33:28.315354    6044 command_runner.go:130] > Mar 28 01:32:19 multinode-240000 kubelet[1533]: I0328 01:32:19.664958    1533 apiserver.go:52] "Watching apiserver"
	I0328 01:33:28.315354    6044 command_runner.go:130] > Mar 28 01:32:19 multinode-240000 kubelet[1533]: I0328 01:32:19.670955    1533 topology_manager.go:215] "Topology Admit Handler" podUID="dc1416cc-736d-4eab-b95d-e963572b78e3" podNamespace="kube-system" podName="coredns-76f75df574-776ph"
	I0328 01:33:28.315593    6044 command_runner.go:130] > Mar 28 01:32:19 multinode-240000 kubelet[1533]: E0328 01:32:19.671192    1533 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-76f75df574-776ph" podUID="dc1416cc-736d-4eab-b95d-e963572b78e3"
	I0328 01:33:28.315593    6044 command_runner.go:130] > Mar 28 01:32:19 multinode-240000 kubelet[1533]: I0328 01:32:19.671207    1533 kubelet.go:1903] "Trying to delete pod" pod="kube-system/etcd-multinode-240000" podUID="8c9e76e4-ed9f-4595-aa5e-ddd6e74f4e93"
	I0328 01:33:28.315593    6044 command_runner.go:130] > Mar 28 01:32:19 multinode-240000 kubelet[1533]: I0328 01:32:19.672582    1533 topology_manager.go:215] "Topology Admit Handler" podUID="7c75e225-0e90-4916-bf27-a00a036e0955" podNamespace="kube-system" podName="kindnet-rwghf"
	I0328 01:33:28.315692    6044 command_runner.go:130] > Mar 28 01:32:19 multinode-240000 kubelet[1533]: I0328 01:32:19.672700    1533 topology_manager.go:215] "Topology Admit Handler" podUID="22fd5683-834d-47ae-a5b4-1ed980514e1b" podNamespace="kube-system" podName="kube-proxy-47rqg"
	I0328 01:33:28.315692    6044 command_runner.go:130] > Mar 28 01:32:19 multinode-240000 kubelet[1533]: I0328 01:32:19.672921    1533 topology_manager.go:215] "Topology Admit Handler" podUID="3f881c2f-5b9a-4dc5-bc8c-56784b9ff60f" podNamespace="kube-system" podName="storage-provisioner"
	I0328 01:33:28.315752    6044 command_runner.go:130] > Mar 28 01:32:19 multinode-240000 kubelet[1533]: I0328 01:32:19.672997    1533 topology_manager.go:215] "Topology Admit Handler" podUID="82be2bd2-ca76-4804-8e23-ebd40a434863" podNamespace="default" podName="busybox-7fdf7869d9-ct428"
	I0328 01:33:28.315800    6044 command_runner.go:130] > Mar 28 01:32:19 multinode-240000 kubelet[1533]: E0328 01:32:19.673204    1533 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7fdf7869d9-ct428" podUID="82be2bd2-ca76-4804-8e23-ebd40a434863"
	I0328 01:33:28.315800    6044 command_runner.go:130] > Mar 28 01:32:19 multinode-240000 kubelet[1533]: I0328 01:32:19.674661    1533 kubelet.go:1903] "Trying to delete pod" pod="kube-system/kube-apiserver-multinode-240000" podUID="7736298d-3898-4693-84bf-2311305bf52c"
	I0328 01:33:28.315800    6044 command_runner.go:130] > Mar 28 01:32:19 multinode-240000 kubelet[1533]: I0328 01:32:19.710220    1533 kubelet.go:1908] "Deleted mirror pod because it is outdated" pod="kube-system/etcd-multinode-240000"
	I0328 01:33:28.315800    6044 command_runner.go:130] > Mar 28 01:32:19 multinode-240000 kubelet[1533]: I0328 01:32:19.714418    1533 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
	I0328 01:33:28.315800    6044 command_runner.go:130] > Mar 28 01:32:19 multinode-240000 kubelet[1533]: I0328 01:32:19.725067    1533 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7c75e225-0e90-4916-bf27-a00a036e0955-xtables-lock\") pod \"kindnet-rwghf\" (UID: \"7c75e225-0e90-4916-bf27-a00a036e0955\") " pod="kube-system/kindnet-rwghf"
	I0328 01:33:28.315800    6044 command_runner.go:130] > Mar 28 01:32:19 multinode-240000 kubelet[1533]: I0328 01:32:19.725144    1533 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/3f881c2f-5b9a-4dc5-bc8c-56784b9ff60f-tmp\") pod \"storage-provisioner\" (UID: \"3f881c2f-5b9a-4dc5-bc8c-56784b9ff60f\") " pod="kube-system/storage-provisioner"
	I0328 01:33:28.315800    6044 command_runner.go:130] > Mar 28 01:32:19 multinode-240000 kubelet[1533]: I0328 01:32:19.725200    1533 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/22fd5683-834d-47ae-a5b4-1ed980514e1b-xtables-lock\") pod \"kube-proxy-47rqg\" (UID: \"22fd5683-834d-47ae-a5b4-1ed980514e1b\") " pod="kube-system/kube-proxy-47rqg"
	I0328 01:33:28.315800    6044 command_runner.go:130] > Mar 28 01:32:19 multinode-240000 kubelet[1533]: I0328 01:32:19.725237    1533 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/7c75e225-0e90-4916-bf27-a00a036e0955-cni-cfg\") pod \"kindnet-rwghf\" (UID: \"7c75e225-0e90-4916-bf27-a00a036e0955\") " pod="kube-system/kindnet-rwghf"
	I0328 01:33:28.315800    6044 command_runner.go:130] > Mar 28 01:32:19 multinode-240000 kubelet[1533]: I0328 01:32:19.725266    1533 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7c75e225-0e90-4916-bf27-a00a036e0955-lib-modules\") pod \"kindnet-rwghf\" (UID: \"7c75e225-0e90-4916-bf27-a00a036e0955\") " pod="kube-system/kindnet-rwghf"
	I0328 01:33:28.315800    6044 command_runner.go:130] > Mar 28 01:32:19 multinode-240000 kubelet[1533]: I0328 01:32:19.725305    1533 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/22fd5683-834d-47ae-a5b4-1ed980514e1b-lib-modules\") pod \"kube-proxy-47rqg\" (UID: \"22fd5683-834d-47ae-a5b4-1ed980514e1b\") " pod="kube-system/kube-proxy-47rqg"
	I0328 01:33:28.315800    6044 command_runner.go:130] > Mar 28 01:32:19 multinode-240000 kubelet[1533]: E0328 01:32:19.725432    1533 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0328 01:33:28.315800    6044 command_runner.go:130] > Mar 28 01:32:19 multinode-240000 kubelet[1533]: E0328 01:32:19.725551    1533 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/dc1416cc-736d-4eab-b95d-e963572b78e3-config-volume podName:dc1416cc-736d-4eab-b95d-e963572b78e3 nodeName:}" failed. No retries permitted until 2024-03-28 01:32:20.225500685 +0000 UTC m=+6.784510375 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/dc1416cc-736d-4eab-b95d-e963572b78e3-config-volume") pod "coredns-76f75df574-776ph" (UID: "dc1416cc-736d-4eab-b95d-e963572b78e3") : object "kube-system"/"coredns" not registered
	I0328 01:33:28.315800    6044 command_runner.go:130] > Mar 28 01:32:19 multinode-240000 kubelet[1533]: I0328 01:32:19.727738    1533 kubelet.go:1908] "Deleted mirror pod because it is outdated" pod="kube-system/kube-apiserver-multinode-240000"
	I0328 01:33:28.316369    6044 command_runner.go:130] > Mar 28 01:32:19 multinode-240000 kubelet[1533]: I0328 01:32:19.734766    1533 status_manager.go:877] "Failed to update status for pod" pod="kube-system/etcd-multinode-240000" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c9e76e4-ed9f-4595-aa5e-ddd6e74f4e93\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"$setElementOrder/hostIPs\\\":[{\\\"ip\\\":\\\"172.28.229.19\\\"}],\\\"$setElementOrder/podIPs\\\":[{\\\"ip\\\":\\\"172.28.229.19\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2024-03-28T01:32:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2024-03-28T01:32:14Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2024-03-28T01:32:14Z\\\",\\\"message\\\":\\\"containers with unready status: [etcd]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2024-03-28T01:32:14Z\\\",\\\"message\\\":\\\"containers with unready status: [etcd]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2024-03-28T01:32:14Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"docker://ab4a76ecb029b98cd5b2c7ce34c9d81d5da9b76e6721e8e54059f840240fcb66\\\",\\\"image\\\":\\\"registry.k8s.io/etcd:3.5.12-0\\\",\\\"imageID\\\":\\\"docker-pullable://registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2024-03-28T01:32:15Z\\\"}}}],\\\"hostIP\\\":\\\"172.28.229.19\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"172.28.229.19\\\"},{\\\"$patch\\\":\\\"delete\\\",\\\"ip\\\":\\\"172.28.227.122\\\"}],\\\"podIP\\\":\\\"172.28.229.19\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"172.28.229.19\\\"},{\\\"$patch\\\":\\\"delete\\\",\\\"ip\\\":\\\"172.28.227.122\\\"}],\\\"startTime\\\":\\\"2024-03-28T01:32:14Z\\\"}}\" for pod \"kube-system\"/\"etcd-multinode-240000\": pods \"etcd-multinode-240000\" not found"
	I0328 01:33:28.316497    6044 command_runner.go:130] > Mar 28 01:32:19 multinode-240000 kubelet[1533]: I0328 01:32:19.799037    1533 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="08b85a8adf05b50d7739532a291175d4" path="/var/lib/kubelet/pods/08b85a8adf05b50d7739532a291175d4/volumes"
	I0328 01:33:28.316527    6044 command_runner.go:130] > Mar 28 01:32:19 multinode-240000 kubelet[1533]: E0328 01:32:19.799563    1533 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0328 01:33:28.316589    6044 command_runner.go:130] > Mar 28 01:32:19 multinode-240000 kubelet[1533]: E0328 01:32:19.799591    1533 projected.go:200] Error preparing data for projected volume kube-api-access-86msg for pod default/busybox-7fdf7869d9-ct428: object "default"/"kube-root-ca.crt" not registered
	I0328 01:33:28.316589    6044 command_runner.go:130] > Mar 28 01:32:19 multinode-240000 kubelet[1533]: E0328 01:32:19.799660    1533 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/82be2bd2-ca76-4804-8e23-ebd40a434863-kube-api-access-86msg podName:82be2bd2-ca76-4804-8e23-ebd40a434863 nodeName:}" failed. No retries permitted until 2024-03-28 01:32:20.299638671 +0000 UTC m=+6.858648361 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-86msg" (UniqueName: "kubernetes.io/projected/82be2bd2-ca76-4804-8e23-ebd40a434863-kube-api-access-86msg") pod "busybox-7fdf7869d9-ct428" (UID: "82be2bd2-ca76-4804-8e23-ebd40a434863") : object "default"/"kube-root-ca.crt" not registered
	I0328 01:33:28.316589    6044 command_runner.go:130] > Mar 28 01:32:19 multinode-240000 kubelet[1533]: I0328 01:32:19.802339    1533 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3bf911dad00226d1456d6201aff35c8b" path="/var/lib/kubelet/pods/3bf911dad00226d1456d6201aff35c8b/volumes"
	I0328 01:33:28.316589    6044 command_runner.go:130] > Mar 28 01:32:19 multinode-240000 kubelet[1533]: I0328 01:32:19.949419    1533 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/etcd-multinode-240000" podStartSLOduration=0.949323047 podStartE2EDuration="949.323047ms" podCreationTimestamp="2024-03-28 01:32:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-03-28 01:32:19.919943873 +0000 UTC m=+6.478953663" watchObservedRunningTime="2024-03-28 01:32:19.949323047 +0000 UTC m=+6.508332737"
	I0328 01:33:28.316589    6044 command_runner.go:130] > Mar 28 01:32:19 multinode-240000 kubelet[1533]: I0328 01:32:19.949693    1533 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-multinode-240000" podStartSLOduration=0.949665448 podStartE2EDuration="949.665448ms" podCreationTimestamp="2024-03-28 01:32:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-03-28 01:32:19.941427427 +0000 UTC m=+6.500437217" watchObservedRunningTime="2024-03-28 01:32:19.949665448 +0000 UTC m=+6.508675138"
	I0328 01:33:28.316589    6044 command_runner.go:130] > Mar 28 01:32:20 multinode-240000 kubelet[1533]: E0328 01:32:20.230868    1533 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0328 01:33:28.316589    6044 command_runner.go:130] > Mar 28 01:32:20 multinode-240000 kubelet[1533]: E0328 01:32:20.231013    1533 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/dc1416cc-736d-4eab-b95d-e963572b78e3-config-volume podName:dc1416cc-736d-4eab-b95d-e963572b78e3 nodeName:}" failed. No retries permitted until 2024-03-28 01:32:21.230991954 +0000 UTC m=+7.790001744 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/dc1416cc-736d-4eab-b95d-e963572b78e3-config-volume") pod "coredns-76f75df574-776ph" (UID: "dc1416cc-736d-4eab-b95d-e963572b78e3") : object "kube-system"/"coredns" not registered
	I0328 01:33:28.316589    6044 command_runner.go:130] > Mar 28 01:32:20 multinode-240000 kubelet[1533]: E0328 01:32:20.331172    1533 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0328 01:33:28.316589    6044 command_runner.go:130] > Mar 28 01:32:20 multinode-240000 kubelet[1533]: E0328 01:32:20.331223    1533 projected.go:200] Error preparing data for projected volume kube-api-access-86msg for pod default/busybox-7fdf7869d9-ct428: object "default"/"kube-root-ca.crt" not registered
	I0328 01:33:28.316589    6044 command_runner.go:130] > Mar 28 01:32:20 multinode-240000 kubelet[1533]: E0328 01:32:20.331292    1533 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/82be2bd2-ca76-4804-8e23-ebd40a434863-kube-api-access-86msg podName:82be2bd2-ca76-4804-8e23-ebd40a434863 nodeName:}" failed. No retries permitted until 2024-03-28 01:32:21.331274305 +0000 UTC m=+7.890283995 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-86msg" (UniqueName: "kubernetes.io/projected/82be2bd2-ca76-4804-8e23-ebd40a434863-kube-api-access-86msg") pod "busybox-7fdf7869d9-ct428" (UID: "82be2bd2-ca76-4804-8e23-ebd40a434863") : object "default"/"kube-root-ca.crt" not registered
	I0328 01:33:28.316589    6044 command_runner.go:130] > Mar 28 01:32:20 multinode-240000 kubelet[1533]: I0328 01:32:20.880883    1533 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="821d3cf9ae1a9ffce2f350e9ee239e00fd8743eb338fae8a5b39734fc9cabf5e"
	I0328 01:33:28.316589    6044 command_runner.go:130] > Mar 28 01:32:20 multinode-240000 kubelet[1533]: I0328 01:32:20.905234    1533 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dfd01cb54b7d89aef97b057d7578bb34d4f58b0e2c9aacddeeff9fbb19db3cb6"
	I0328 01:33:28.316589    6044 command_runner.go:130] > Mar 28 01:32:21 multinode-240000 kubelet[1533]: E0328 01:32:21.238101    1533 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0328 01:33:28.316589    6044 command_runner.go:130] > Mar 28 01:32:21 multinode-240000 kubelet[1533]: E0328 01:32:21.238271    1533 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/dc1416cc-736d-4eab-b95d-e963572b78e3-config-volume podName:dc1416cc-736d-4eab-b95d-e963572b78e3 nodeName:}" failed. No retries permitted until 2024-03-28 01:32:23.238201582 +0000 UTC m=+9.797211372 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/dc1416cc-736d-4eab-b95d-e963572b78e3-config-volume") pod "coredns-76f75df574-776ph" (UID: "dc1416cc-736d-4eab-b95d-e963572b78e3") : object "kube-system"/"coredns" not registered
	I0328 01:33:28.316589    6044 command_runner.go:130] > Mar 28 01:32:21 multinode-240000 kubelet[1533]: I0328 01:32:21.272138    1533 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="347f7ad7ebaed8796c8b12cf936e661c605c1c7a9dc02ccb15b4c682a96c1058"
	I0328 01:33:28.317134    6044 command_runner.go:130] > Mar 28 01:32:21 multinode-240000 kubelet[1533]: E0328 01:32:21.338941    1533 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0328 01:33:28.317134    6044 command_runner.go:130] > Mar 28 01:32:21 multinode-240000 kubelet[1533]: E0328 01:32:21.338996    1533 projected.go:200] Error preparing data for projected volume kube-api-access-86msg for pod default/busybox-7fdf7869d9-ct428: object "default"/"kube-root-ca.crt" not registered
	I0328 01:33:28.317134    6044 command_runner.go:130] > Mar 28 01:32:21 multinode-240000 kubelet[1533]: E0328 01:32:21.339062    1533 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/82be2bd2-ca76-4804-8e23-ebd40a434863-kube-api-access-86msg podName:82be2bd2-ca76-4804-8e23-ebd40a434863 nodeName:}" failed. No retries permitted until 2024-03-28 01:32:23.339043635 +0000 UTC m=+9.898053325 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-86msg" (UniqueName: "kubernetes.io/projected/82be2bd2-ca76-4804-8e23-ebd40a434863-kube-api-access-86msg") pod "busybox-7fdf7869d9-ct428" (UID: "82be2bd2-ca76-4804-8e23-ebd40a434863") : object "default"/"kube-root-ca.crt" not registered
	I0328 01:33:28.317270    6044 command_runner.go:130] > Mar 28 01:32:21 multinode-240000 kubelet[1533]: E0328 01:32:21.791679    1533 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-76f75df574-776ph" podUID="dc1416cc-736d-4eab-b95d-e963572b78e3"
	I0328 01:33:28.317270    6044 command_runner.go:130] > Mar 28 01:32:21 multinode-240000 kubelet[1533]: E0328 01:32:21.792217    1533 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7fdf7869d9-ct428" podUID="82be2bd2-ca76-4804-8e23-ebd40a434863"
	I0328 01:33:28.317332    6044 command_runner.go:130] > Mar 28 01:32:23 multinode-240000 kubelet[1533]: E0328 01:32:23.261654    1533 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0328 01:33:28.317332    6044 command_runner.go:130] > Mar 28 01:32:23 multinode-240000 kubelet[1533]: E0328 01:32:23.261858    1533 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/dc1416cc-736d-4eab-b95d-e963572b78e3-config-volume podName:dc1416cc-736d-4eab-b95d-e963572b78e3 nodeName:}" failed. No retries permitted until 2024-03-28 01:32:27.261834961 +0000 UTC m=+13.820844751 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/dc1416cc-736d-4eab-b95d-e963572b78e3-config-volume") pod "coredns-76f75df574-776ph" (UID: "dc1416cc-736d-4eab-b95d-e963572b78e3") : object "kube-system"/"coredns" not registered
	I0328 01:33:28.317332    6044 command_runner.go:130] > Mar 28 01:32:23 multinode-240000 kubelet[1533]: E0328 01:32:23.362225    1533 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0328 01:33:28.317332    6044 command_runner.go:130] > Mar 28 01:32:23 multinode-240000 kubelet[1533]: E0328 01:32:23.362265    1533 projected.go:200] Error preparing data for projected volume kube-api-access-86msg for pod default/busybox-7fdf7869d9-ct428: object "default"/"kube-root-ca.crt" not registered
	I0328 01:33:28.317332    6044 command_runner.go:130] > Mar 28 01:32:23 multinode-240000 kubelet[1533]: E0328 01:32:23.362325    1533 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/82be2bd2-ca76-4804-8e23-ebd40a434863-kube-api-access-86msg podName:82be2bd2-ca76-4804-8e23-ebd40a434863 nodeName:}" failed. No retries permitted until 2024-03-28 01:32:27.362305413 +0000 UTC m=+13.921315103 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-86msg" (UniqueName: "kubernetes.io/projected/82be2bd2-ca76-4804-8e23-ebd40a434863-kube-api-access-86msg") pod "busybox-7fdf7869d9-ct428" (UID: "82be2bd2-ca76-4804-8e23-ebd40a434863") : object "default"/"kube-root-ca.crt" not registered
	I0328 01:33:28.317332    6044 command_runner.go:130] > Mar 28 01:32:23 multinode-240000 kubelet[1533]: E0328 01:32:23.790396    1533 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-76f75df574-776ph" podUID="dc1416cc-736d-4eab-b95d-e963572b78e3"
	I0328 01:33:28.317332    6044 command_runner.go:130] > Mar 28 01:32:23 multinode-240000 kubelet[1533]: E0328 01:32:23.790902    1533 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7fdf7869d9-ct428" podUID="82be2bd2-ca76-4804-8e23-ebd40a434863"
	I0328 01:33:28.317332    6044 command_runner.go:130] > Mar 28 01:32:25 multinode-240000 kubelet[1533]: E0328 01:32:25.790044    1533 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7fdf7869d9-ct428" podUID="82be2bd2-ca76-4804-8e23-ebd40a434863"
	I0328 01:33:28.317332    6044 command_runner.go:130] > Mar 28 01:32:25 multinode-240000 kubelet[1533]: E0328 01:32:25.790562    1533 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-76f75df574-776ph" podUID="dc1416cc-736d-4eab-b95d-e963572b78e3"
	I0328 01:33:28.317332    6044 command_runner.go:130] > Mar 28 01:32:27 multinode-240000 kubelet[1533]: E0328 01:32:27.292215    1533 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0328 01:33:28.317332    6044 command_runner.go:130] > Mar 28 01:32:27 multinode-240000 kubelet[1533]: E0328 01:32:27.292399    1533 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/dc1416cc-736d-4eab-b95d-e963572b78e3-config-volume podName:dc1416cc-736d-4eab-b95d-e963572b78e3 nodeName:}" failed. No retries permitted until 2024-03-28 01:32:35.292355671 +0000 UTC m=+21.851365461 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/dc1416cc-736d-4eab-b95d-e963572b78e3-config-volume") pod "coredns-76f75df574-776ph" (UID: "dc1416cc-736d-4eab-b95d-e963572b78e3") : object "kube-system"/"coredns" not registered
	I0328 01:33:28.317332    6044 command_runner.go:130] > Mar 28 01:32:27 multinode-240000 kubelet[1533]: E0328 01:32:27.393085    1533 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0328 01:33:28.317332    6044 command_runner.go:130] > Mar 28 01:32:27 multinode-240000 kubelet[1533]: E0328 01:32:27.393207    1533 projected.go:200] Error preparing data for projected volume kube-api-access-86msg for pod default/busybox-7fdf7869d9-ct428: object "default"/"kube-root-ca.crt" not registered
	I0328 01:33:28.317856    6044 command_runner.go:130] > Mar 28 01:32:27 multinode-240000 kubelet[1533]: E0328 01:32:27.393270    1533 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/82be2bd2-ca76-4804-8e23-ebd40a434863-kube-api-access-86msg podName:82be2bd2-ca76-4804-8e23-ebd40a434863 nodeName:}" failed. No retries permitted until 2024-03-28 01:32:35.393251521 +0000 UTC m=+21.952261211 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-86msg" (UniqueName: "kubernetes.io/projected/82be2bd2-ca76-4804-8e23-ebd40a434863-kube-api-access-86msg") pod "busybox-7fdf7869d9-ct428" (UID: "82be2bd2-ca76-4804-8e23-ebd40a434863") : object "default"/"kube-root-ca.crt" not registered
	I0328 01:33:28.317856    6044 command_runner.go:130] > Mar 28 01:32:27 multinode-240000 kubelet[1533]: E0328 01:32:27.791559    1533 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7fdf7869d9-ct428" podUID="82be2bd2-ca76-4804-8e23-ebd40a434863"
	I0328 01:33:28.317856    6044 command_runner.go:130] > Mar 28 01:32:27 multinode-240000 kubelet[1533]: E0328 01:32:27.792839    1533 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-76f75df574-776ph" podUID="dc1416cc-736d-4eab-b95d-e963572b78e3"
	I0328 01:33:28.317856    6044 command_runner.go:130] > Mar 28 01:32:29 multinode-240000 kubelet[1533]: E0328 01:32:29.790087    1533 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-76f75df574-776ph" podUID="dc1416cc-736d-4eab-b95d-e963572b78e3"
	I0328 01:33:28.318022    6044 command_runner.go:130] > Mar 28 01:32:29 multinode-240000 kubelet[1533]: E0328 01:32:29.793138    1533 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7fdf7869d9-ct428" podUID="82be2bd2-ca76-4804-8e23-ebd40a434863"
	I0328 01:33:28.318080    6044 command_runner.go:130] > Mar 28 01:32:31 multinode-240000 kubelet[1533]: E0328 01:32:31.791578    1533 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7fdf7869d9-ct428" podUID="82be2bd2-ca76-4804-8e23-ebd40a434863"
	I0328 01:33:28.318142    6044 command_runner.go:130] > Mar 28 01:32:31 multinode-240000 kubelet[1533]: E0328 01:32:31.792402    1533 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-76f75df574-776ph" podUID="dc1416cc-736d-4eab-b95d-e963572b78e3"
	I0328 01:33:28.318142    6044 command_runner.go:130] > Mar 28 01:32:33 multinode-240000 kubelet[1533]: E0328 01:32:33.789342    1533 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7fdf7869d9-ct428" podUID="82be2bd2-ca76-4804-8e23-ebd40a434863"
	I0328 01:33:28.318207    6044 command_runner.go:130] > Mar 28 01:32:33 multinode-240000 kubelet[1533]: E0328 01:32:33.790306    1533 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-76f75df574-776ph" podUID="dc1416cc-736d-4eab-b95d-e963572b78e3"
	I0328 01:33:28.318207    6044 command_runner.go:130] > Mar 28 01:32:35 multinode-240000 kubelet[1533]: E0328 01:32:35.358933    1533 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0328 01:33:28.318272    6044 command_runner.go:130] > Mar 28 01:32:35 multinode-240000 kubelet[1533]: E0328 01:32:35.359250    1533 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/dc1416cc-736d-4eab-b95d-e963572b78e3-config-volume podName:dc1416cc-736d-4eab-b95d-e963572b78e3 nodeName:}" failed. No retries permitted until 2024-03-28 01:32:51.359180546 +0000 UTC m=+37.918190236 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/dc1416cc-736d-4eab-b95d-e963572b78e3-config-volume") pod "coredns-76f75df574-776ph" (UID: "dc1416cc-736d-4eab-b95d-e963572b78e3") : object "kube-system"/"coredns" not registered
	I0328 01:33:28.318272    6044 command_runner.go:130] > Mar 28 01:32:35 multinode-240000 kubelet[1533]: E0328 01:32:35.460013    1533 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0328 01:33:28.318375    6044 command_runner.go:130] > Mar 28 01:32:35 multinode-240000 kubelet[1533]: E0328 01:32:35.460054    1533 projected.go:200] Error preparing data for projected volume kube-api-access-86msg for pod default/busybox-7fdf7869d9-ct428: object "default"/"kube-root-ca.crt" not registered
	I0328 01:33:28.318430    6044 command_runner.go:130] > Mar 28 01:32:35 multinode-240000 kubelet[1533]: E0328 01:32:35.460129    1533 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/82be2bd2-ca76-4804-8e23-ebd40a434863-kube-api-access-86msg podName:82be2bd2-ca76-4804-8e23-ebd40a434863 nodeName:}" failed. No retries permitted until 2024-03-28 01:32:51.460096057 +0000 UTC m=+38.019105747 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-86msg" (UniqueName: "kubernetes.io/projected/82be2bd2-ca76-4804-8e23-ebd40a434863-kube-api-access-86msg") pod "busybox-7fdf7869d9-ct428" (UID: "82be2bd2-ca76-4804-8e23-ebd40a434863") : object "default"/"kube-root-ca.crt" not registered
	I0328 01:33:28.318494    6044 command_runner.go:130] > Mar 28 01:32:35 multinode-240000 kubelet[1533]: E0328 01:32:35.790050    1533 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7fdf7869d9-ct428" podUID="82be2bd2-ca76-4804-8e23-ebd40a434863"
	I0328 01:33:28.318593    6044 command_runner.go:130] > Mar 28 01:32:35 multinode-240000 kubelet[1533]: E0328 01:32:35.792176    1533 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-76f75df574-776ph" podUID="dc1416cc-736d-4eab-b95d-e963572b78e3"
	I0328 01:33:28.318593    6044 command_runner.go:130] > Mar 28 01:32:37 multinode-240000 kubelet[1533]: E0328 01:32:37.791217    1533 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-76f75df574-776ph" podUID="dc1416cc-736d-4eab-b95d-e963572b78e3"
	I0328 01:33:28.318593    6044 command_runner.go:130] > Mar 28 01:32:37 multinode-240000 kubelet[1533]: E0328 01:32:37.792228    1533 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7fdf7869d9-ct428" podUID="82be2bd2-ca76-4804-8e23-ebd40a434863"
	I0328 01:33:28.318593    6044 command_runner.go:130] > Mar 28 01:32:39 multinode-240000 kubelet[1533]: E0328 01:32:39.789082    1533 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7fdf7869d9-ct428" podUID="82be2bd2-ca76-4804-8e23-ebd40a434863"
	I0328 01:33:28.318593    6044 command_runner.go:130] > Mar 28 01:32:39 multinode-240000 kubelet[1533]: E0328 01:32:39.789888    1533 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-76f75df574-776ph" podUID="dc1416cc-736d-4eab-b95d-e963572b78e3"
	I0328 01:33:28.318593    6044 command_runner.go:130] > Mar 28 01:32:41 multinode-240000 kubelet[1533]: E0328 01:32:41.789933    1533 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7fdf7869d9-ct428" podUID="82be2bd2-ca76-4804-8e23-ebd40a434863"
	I0328 01:33:28.318593    6044 command_runner.go:130] > Mar 28 01:32:41 multinode-240000 kubelet[1533]: E0328 01:32:41.790703    1533 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-76f75df574-776ph" podUID="dc1416cc-736d-4eab-b95d-e963572b78e3"
	I0328 01:33:28.318593    6044 command_runner.go:130] > Mar 28 01:32:43 multinode-240000 kubelet[1533]: E0328 01:32:43.789453    1533 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7fdf7869d9-ct428" podUID="82be2bd2-ca76-4804-8e23-ebd40a434863"
	I0328 01:33:28.318593    6044 command_runner.go:130] > Mar 28 01:32:43 multinode-240000 kubelet[1533]: E0328 01:32:43.790318    1533 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-76f75df574-776ph" podUID="dc1416cc-736d-4eab-b95d-e963572b78e3"
	I0328 01:33:28.318593    6044 command_runner.go:130] > Mar 28 01:32:45 multinode-240000 kubelet[1533]: E0328 01:32:45.789795    1533 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-76f75df574-776ph" podUID="dc1416cc-736d-4eab-b95d-e963572b78e3"
	I0328 01:33:28.318593    6044 command_runner.go:130] > Mar 28 01:32:45 multinode-240000 kubelet[1533]: E0328 01:32:45.790497    1533 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7fdf7869d9-ct428" podUID="82be2bd2-ca76-4804-8e23-ebd40a434863"
	I0328 01:33:28.318593    6044 command_runner.go:130] > Mar 28 01:32:47 multinode-240000 kubelet[1533]: E0328 01:32:47.789306    1533 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7fdf7869d9-ct428" podUID="82be2bd2-ca76-4804-8e23-ebd40a434863"
	I0328 01:33:28.318593    6044 command_runner.go:130] > Mar 28 01:32:47 multinode-240000 kubelet[1533]: E0328 01:32:47.790760    1533 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-76f75df574-776ph" podUID="dc1416cc-736d-4eab-b95d-e963572b78e3"
	I0328 01:33:28.319118    6044 command_runner.go:130] > Mar 28 01:32:49 multinode-240000 kubelet[1533]: E0328 01:32:49.790669    1533 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-76f75df574-776ph" podUID="dc1416cc-736d-4eab-b95d-e963572b78e3"
	I0328 01:33:28.319174    6044 command_runner.go:130] > Mar 28 01:32:49 multinode-240000 kubelet[1533]: E0328 01:32:49.800302    1533 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7fdf7869d9-ct428" podUID="82be2bd2-ca76-4804-8e23-ebd40a434863"
	I0328 01:33:28.319174    6044 command_runner.go:130] > Mar 28 01:32:51 multinode-240000 kubelet[1533]: E0328 01:32:51.398046    1533 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0328 01:33:28.319353    6044 command_runner.go:130] > Mar 28 01:32:51 multinode-240000 kubelet[1533]: E0328 01:32:51.399557    1533 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/dc1416cc-736d-4eab-b95d-e963572b78e3-config-volume podName:dc1416cc-736d-4eab-b95d-e963572b78e3 nodeName:}" failed. No retries permitted until 2024-03-28 01:33:23.399534782 +0000 UTC m=+69.958544472 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/dc1416cc-736d-4eab-b95d-e963572b78e3-config-volume") pod "coredns-76f75df574-776ph" (UID: "dc1416cc-736d-4eab-b95d-e963572b78e3") : object "kube-system"/"coredns" not registered
	I0328 01:33:28.319409    6044 command_runner.go:130] > Mar 28 01:32:51 multinode-240000 kubelet[1533]: E0328 01:32:51.499389    1533 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0328 01:33:28.319409    6044 command_runner.go:130] > Mar 28 01:32:51 multinode-240000 kubelet[1533]: E0328 01:32:51.499479    1533 projected.go:200] Error preparing data for projected volume kube-api-access-86msg for pod default/busybox-7fdf7869d9-ct428: object "default"/"kube-root-ca.crt" not registered
	I0328 01:33:28.319409    6044 command_runner.go:130] > Mar 28 01:32:51 multinode-240000 kubelet[1533]: E0328 01:32:51.499555    1533 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/82be2bd2-ca76-4804-8e23-ebd40a434863-kube-api-access-86msg podName:82be2bd2-ca76-4804-8e23-ebd40a434863 nodeName:}" failed. No retries permitted until 2024-03-28 01:33:23.499533548 +0000 UTC m=+70.058543238 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-86msg" (UniqueName: "kubernetes.io/projected/82be2bd2-ca76-4804-8e23-ebd40a434863-kube-api-access-86msg") pod "busybox-7fdf7869d9-ct428" (UID: "82be2bd2-ca76-4804-8e23-ebd40a434863") : object "default"/"kube-root-ca.crt" not registered
	I0328 01:33:28.319409    6044 command_runner.go:130] > Mar 28 01:32:51 multinode-240000 kubelet[1533]: E0328 01:32:51.789982    1533 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7fdf7869d9-ct428" podUID="82be2bd2-ca76-4804-8e23-ebd40a434863"
	I0328 01:33:28.319409    6044 command_runner.go:130] > Mar 28 01:32:51 multinode-240000 kubelet[1533]: E0328 01:32:51.790491    1533 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-76f75df574-776ph" podUID="dc1416cc-736d-4eab-b95d-e963572b78e3"
	I0328 01:33:28.319409    6044 command_runner.go:130] > Mar 28 01:32:52 multinode-240000 kubelet[1533]: I0328 01:32:52.819055    1533 scope.go:117] "RemoveContainer" containerID="d02996b2d57bf7439b634e180f3f28e83a0825e92695a9ca17ecca77cbb5da1c"
	I0328 01:33:28.319409    6044 command_runner.go:130] > Mar 28 01:32:52 multinode-240000 kubelet[1533]: I0328 01:32:52.819508    1533 scope.go:117] "RemoveContainer" containerID="4dcf03394ea80c2d0427ea263be7c533ab3aaf86ba89e254db95cbdd1b70a343"
	I0328 01:33:28.319409    6044 command_runner.go:130] > Mar 28 01:32:52 multinode-240000 kubelet[1533]: E0328 01:32:52.820004    1533 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(3f881c2f-5b9a-4dc5-bc8c-56784b9ff60f)\"" pod="kube-system/storage-provisioner" podUID="3f881c2f-5b9a-4dc5-bc8c-56784b9ff60f"
	I0328 01:33:28.319409    6044 command_runner.go:130] > Mar 28 01:32:53 multinode-240000 kubelet[1533]: E0328 01:32:53.789452    1533 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7fdf7869d9-ct428" podUID="82be2bd2-ca76-4804-8e23-ebd40a434863"
	I0328 01:33:28.319409    6044 command_runner.go:130] > Mar 28 01:32:53 multinode-240000 kubelet[1533]: E0328 01:32:53.791042    1533 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-76f75df574-776ph" podUID="dc1416cc-736d-4eab-b95d-e963572b78e3"
	I0328 01:33:28.319409    6044 command_runner.go:130] > Mar 28 01:32:53 multinode-240000 kubelet[1533]: I0328 01:32:53.945064    1533 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
	I0328 01:33:28.319409    6044 command_runner.go:130] > Mar 28 01:33:04 multinode-240000 kubelet[1533]: I0328 01:33:04.789137    1533 scope.go:117] "RemoveContainer" containerID="4dcf03394ea80c2d0427ea263be7c533ab3aaf86ba89e254db95cbdd1b70a343"
	I0328 01:33:28.319409    6044 command_runner.go:130] > Mar 28 01:33:13 multinode-240000 kubelet[1533]: I0328 01:33:13.803616    1533 scope.go:117] "RemoveContainer" containerID="66f15076d3443d3fc3179676ba45f1cbac7cf2eb673e7741a3dddae0eb5baac8"
	I0328 01:33:28.319409    6044 command_runner.go:130] > Mar 28 01:33:13 multinode-240000 kubelet[1533]: E0328 01:33:13.838374    1533 iptables.go:575] "Could not set up iptables canary" err=<
	I0328 01:33:28.319409    6044 command_runner.go:130] > Mar 28 01:33:13 multinode-240000 kubelet[1533]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	I0328 01:33:28.319409    6044 command_runner.go:130] > Mar 28 01:33:13 multinode-240000 kubelet[1533]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	I0328 01:33:28.319409    6044 command_runner.go:130] > Mar 28 01:33:13 multinode-240000 kubelet[1533]:         Perhaps ip6tables or your kernel needs to be upgraded.
	I0328 01:33:28.319409    6044 command_runner.go:130] > Mar 28 01:33:13 multinode-240000 kubelet[1533]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	I0328 01:33:28.319409    6044 command_runner.go:130] > Mar 28 01:33:13 multinode-240000 kubelet[1533]: I0328 01:33:13.850324    1533 scope.go:117] "RemoveContainer" containerID="a01212226d03a29a5f7e096880ecf627817c14801c81f452beaa1a398b97cfe3"
	I0328 01:33:28.369437    6044 logs.go:123] Gathering logs for dmesg ...
	I0328 01:33:28.369437    6044 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:33:28.396447    6044 command_runner.go:130] > [Mar28 01:30] You have booted with nomodeset. This means your GPU drivers are DISABLED
	I0328 01:33:28.396447    6044 command_runner.go:130] > [  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	I0328 01:33:28.396447    6044 command_runner.go:130] > [  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	I0328 01:33:28.396447    6044 command_runner.go:130] > [  +0.141916] RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
	I0328 01:33:28.396447    6044 command_runner.go:130] > [  +0.024106] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	I0328 01:33:28.396447    6044 command_runner.go:130] > [  +0.000005] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	I0328 01:33:28.396447    6044 command_runner.go:130] > [  +0.000001] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	I0328 01:33:28.396447    6044 command_runner.go:130] > [  +0.068008] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	I0328 01:33:28.396447    6044 command_runner.go:130] > [  +0.027431] * Found PM-Timer Bug on the chipset. Due to workarounds for a bug,
	I0328 01:33:28.396447    6044 command_runner.go:130] >               * this clock source is slow. Consider trying other clock sources
	I0328 01:33:28.396447    6044 command_runner.go:130] > [  +5.946328] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	I0328 01:33:28.396447    6044 command_runner.go:130] > [  +0.758535] psmouse serio1: trackpoint: failed to get extended button data, assuming 3 buttons
	I0328 01:33:28.396447    6044 command_runner.go:130] > [  +1.937420] systemd-fstab-generator[115]: Ignoring "noauto" option for root device
	I0328 01:33:28.396447    6044 command_runner.go:130] > [  +7.347197] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	I0328 01:33:28.396447    6044 command_runner.go:130] > [  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	I0328 01:33:28.396447    6044 command_runner.go:130] > [  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	I0328 01:33:28.396447    6044 command_runner.go:130] > [Mar28 01:31] systemd-fstab-generator[640]: Ignoring "noauto" option for root device
	I0328 01:33:28.396447    6044 command_runner.go:130] > [  +0.201840] systemd-fstab-generator[653]: Ignoring "noauto" option for root device
	I0328 01:33:28.396447    6044 command_runner.go:130] > [Mar28 01:32] systemd-fstab-generator[979]: Ignoring "noauto" option for root device
	I0328 01:33:28.396447    6044 command_runner.go:130] > [  +0.108343] kauditd_printk_skb: 73 callbacks suppressed
	I0328 01:33:28.396447    6044 command_runner.go:130] > [  +0.586323] systemd-fstab-generator[1017]: Ignoring "noauto" option for root device
	I0328 01:33:28.396447    6044 command_runner.go:130] > [  +0.218407] systemd-fstab-generator[1029]: Ignoring "noauto" option for root device
	I0328 01:33:28.396447    6044 command_runner.go:130] > [  +0.238441] systemd-fstab-generator[1043]: Ignoring "noauto" option for root device
	I0328 01:33:28.396447    6044 command_runner.go:130] > [  +3.002162] systemd-fstab-generator[1230]: Ignoring "noauto" option for root device
	I0328 01:33:28.396447    6044 command_runner.go:130] > [  +0.206082] systemd-fstab-generator[1242]: Ignoring "noauto" option for root device
	I0328 01:33:28.396447    6044 command_runner.go:130] > [  +0.206423] systemd-fstab-generator[1254]: Ignoring "noauto" option for root device
	I0328 01:33:28.396447    6044 command_runner.go:130] > [  +0.316656] systemd-fstab-generator[1269]: Ignoring "noauto" option for root device
	I0328 01:33:28.396447    6044 command_runner.go:130] > [  +0.941398] systemd-fstab-generator[1391]: Ignoring "noauto" option for root device
	I0328 01:33:28.396447    6044 command_runner.go:130] > [  +0.123620] kauditd_printk_skb: 205 callbacks suppressed
	I0328 01:33:28.396447    6044 command_runner.go:130] > [  +3.687763] systemd-fstab-generator[1526]: Ignoring "noauto" option for root device
	I0328 01:33:28.396447    6044 command_runner.go:130] > [  +1.367953] kauditd_printk_skb: 44 callbacks suppressed
	I0328 01:33:28.396447    6044 command_runner.go:130] > [  +6.014600] kauditd_printk_skb: 30 callbacks suppressed
	I0328 01:33:28.396447    6044 command_runner.go:130] > [  +4.465273] systemd-fstab-generator[3066]: Ignoring "noauto" option for root device
	I0328 01:33:28.396447    6044 command_runner.go:130] > [  +7.649293] kauditd_printk_skb: 70 callbacks suppressed
	I0328 01:33:28.398449    6044 logs.go:123] Gathering logs for kube-scheduler [7061eab02790] ...
	I0328 01:33:28.398449    6044 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7061eab02790"
	I0328 01:33:28.430437    6044 command_runner.go:130] ! I0328 01:07:24.655923       1 serving.go:380] Generated self-signed cert in-memory
	I0328 01:33:28.430437    6044 command_runner.go:130] ! W0328 01:07:26.955719       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	I0328 01:33:28.430437    6044 command_runner.go:130] ! W0328 01:07:26.956050       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	I0328 01:33:28.430437    6044 command_runner.go:130] ! W0328 01:07:26.956340       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	I0328 01:33:28.430437    6044 command_runner.go:130] ! W0328 01:07:26.956518       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0328 01:33:28.431519    6044 command_runner.go:130] ! I0328 01:07:27.011654       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.29.3"
	I0328 01:33:28.431519    6044 command_runner.go:130] ! I0328 01:07:27.011702       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0328 01:33:28.431519    6044 command_runner.go:130] ! I0328 01:07:27.016073       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0328 01:33:28.431519    6044 command_runner.go:130] ! I0328 01:07:27.016395       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0328 01:33:28.431519    6044 command_runner.go:130] ! I0328 01:07:27.016638       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0328 01:33:28.431519    6044 command_runner.go:130] ! W0328 01:07:27.041308       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0328 01:33:28.431519    6044 command_runner.go:130] ! E0328 01:07:27.041400       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0328 01:33:28.431519    6044 command_runner.go:130] ! W0328 01:07:27.041664       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0328 01:33:28.431519    6044 command_runner.go:130] ! E0328 01:07:27.043394       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0328 01:33:28.431519    6044 command_runner.go:130] ! I0328 01:07:27.016423       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0328 01:33:28.431519    6044 command_runner.go:130] ! W0328 01:07:27.042004       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0328 01:33:28.431519    6044 command_runner.go:130] ! E0328 01:07:27.047333       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0328 01:33:28.431519    6044 command_runner.go:130] ! W0328 01:07:27.042140       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0328 01:33:28.431519    6044 command_runner.go:130] ! E0328 01:07:27.047417       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0328 01:33:28.431519    6044 command_runner.go:130] ! W0328 01:07:27.042578       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0328 01:33:28.431519    6044 command_runner.go:130] ! E0328 01:07:27.047834       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0328 01:33:28.431519    6044 command_runner.go:130] ! W0328 01:07:27.042825       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0328 01:33:28.431519    6044 command_runner.go:130] ! E0328 01:07:27.047881       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0328 01:33:28.431519    6044 command_runner.go:130] ! W0328 01:07:27.054199       1 reflector.go:539] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0328 01:33:28.431519    6044 command_runner.go:130] ! E0328 01:07:27.054246       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0328 01:33:28.431519    6044 command_runner.go:130] ! W0328 01:07:27.054853       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0328 01:33:28.431519    6044 command_runner.go:130] ! E0328 01:07:27.054928       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0328 01:33:28.431519    6044 command_runner.go:130] ! W0328 01:07:27.055680       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0328 01:33:28.431519    6044 command_runner.go:130] ! E0328 01:07:27.056176       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0328 01:33:28.431519    6044 command_runner.go:130] ! W0328 01:07:27.056445       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0328 01:33:28.431519    6044 command_runner.go:130] ! E0328 01:07:27.056649       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0328 01:33:28.431519    6044 command_runner.go:130] ! W0328 01:07:27.056923       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0328 01:33:28.431519    6044 command_runner.go:130] ! E0328 01:07:27.057184       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0328 01:33:28.432445    6044 command_runner.go:130] ! W0328 01:07:27.057363       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0328 01:33:28.432445    6044 command_runner.go:130] ! E0328 01:07:27.057575       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0328 01:33:28.432445    6044 command_runner.go:130] ! W0328 01:07:27.057920       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0328 01:33:28.432445    6044 command_runner.go:130] ! E0328 01:07:27.058160       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0328 01:33:28.432445    6044 command_runner.go:130] ! W0328 01:07:27.058539       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0328 01:33:28.432445    6044 command_runner.go:130] ! E0328 01:07:27.058924       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0328 01:33:28.432445    6044 command_runner.go:130] ! W0328 01:07:27.059533       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0328 01:33:28.432445    6044 command_runner.go:130] ! E0328 01:07:27.060749       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0328 01:33:28.432445    6044 command_runner.go:130] ! W0328 01:07:27.927413       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0328 01:33:28.432445    6044 command_runner.go:130] ! E0328 01:07:27.927826       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0328 01:33:28.432445    6044 command_runner.go:130] ! W0328 01:07:28.013939       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0328 01:33:28.432445    6044 command_runner.go:130] ! E0328 01:07:28.014242       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0328 01:33:28.432445    6044 command_runner.go:130] ! W0328 01:07:28.056311       1 reflector.go:539] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0328 01:33:28.432445    6044 command_runner.go:130] ! E0328 01:07:28.058850       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0328 01:33:28.432445    6044 command_runner.go:130] ! W0328 01:07:28.076506       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0328 01:33:28.432445    6044 command_runner.go:130] ! E0328 01:07:28.076537       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0328 01:33:28.432445    6044 command_runner.go:130] ! W0328 01:07:28.106836       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0328 01:33:28.432445    6044 command_runner.go:130] ! E0328 01:07:28.107081       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0328 01:33:28.432445    6044 command_runner.go:130] ! W0328 01:07:28.240756       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0328 01:33:28.432445    6044 command_runner.go:130] ! E0328 01:07:28.240834       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0328 01:33:28.432445    6044 command_runner.go:130] ! W0328 01:07:28.255074       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0328 01:33:28.432445    6044 command_runner.go:130] ! E0328 01:07:28.255356       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0328 01:33:28.432445    6044 command_runner.go:130] ! W0328 01:07:28.278207       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0328 01:33:28.432445    6044 command_runner.go:130] ! E0328 01:07:28.278668       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0328 01:33:28.432445    6044 command_runner.go:130] ! W0328 01:07:28.381584       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0328 01:33:28.432445    6044 command_runner.go:130] ! E0328 01:07:28.381627       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0328 01:33:28.432445    6044 command_runner.go:130] ! W0328 01:07:28.514618       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0328 01:33:28.432445    6044 command_runner.go:130] ! E0328 01:07:28.515155       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0328 01:33:28.432445    6044 command_runner.go:130] ! W0328 01:07:28.528993       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0328 01:33:28.432445    6044 command_runner.go:130] ! E0328 01:07:28.529395       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0328 01:33:28.432445    6044 command_runner.go:130] ! W0328 01:07:28.532653       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0328 01:33:28.432445    6044 command_runner.go:130] ! E0328 01:07:28.532704       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0328 01:33:28.433438    6044 command_runner.go:130] ! W0328 01:07:28.584380       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0328 01:33:28.433438    6044 command_runner.go:130] ! E0328 01:07:28.585331       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0328 01:33:28.433438    6044 command_runner.go:130] ! W0328 01:07:28.617611       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0328 01:33:28.433438    6044 command_runner.go:130] ! E0328 01:07:28.618424       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0328 01:33:28.433438    6044 command_runner.go:130] ! W0328 01:07:28.646703       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0328 01:33:28.433438    6044 command_runner.go:130] ! E0328 01:07:28.647128       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0328 01:33:28.433438    6044 command_runner.go:130] ! I0328 01:07:30.316754       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0328 01:33:28.433438    6044 command_runner.go:130] ! I0328 01:29:38.212199       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0328 01:33:28.433438    6044 command_runner.go:130] ! I0328 01:29:38.213339       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0328 01:33:28.433438    6044 command_runner.go:130] ! I0328 01:29:38.213731       1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0328 01:33:28.433438    6044 command_runner.go:130] ! E0328 01:29:38.223877       1 run.go:74] "command failed" err="finished without leader elect"
	I0328 01:33:28.445442    6044 logs.go:123] Gathering logs for kube-proxy [bb0b3c542264] ...
	I0328 01:33:28.445442    6044 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb0b3c542264"
	I0328 01:33:28.476758    6044 command_runner.go:130] ! I0328 01:07:46.260052       1 server_others.go:72] "Using iptables proxy"
	I0328 01:33:28.476758    6044 command_runner.go:130] ! I0328 01:07:46.279785       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["172.28.227.122"]
	I0328 01:33:28.476758    6044 command_runner.go:130] ! I0328 01:07:46.364307       1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
	I0328 01:33:28.476758    6044 command_runner.go:130] ! I0328 01:07:46.364414       1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0328 01:33:28.476758    6044 command_runner.go:130] ! I0328 01:07:46.364433       1 server_others.go:168] "Using iptables Proxier"
	I0328 01:33:28.476758    6044 command_runner.go:130] ! I0328 01:07:46.368524       1 proxier.go:245] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0328 01:33:28.476758    6044 command_runner.go:130] ! I0328 01:07:46.368854       1 server.go:865] "Version info" version="v1.29.3"
	I0328 01:33:28.476758    6044 command_runner.go:130] ! I0328 01:07:46.368909       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0328 01:33:28.476758    6044 command_runner.go:130] ! I0328 01:07:46.370904       1 config.go:188] "Starting service config controller"
	I0328 01:33:28.476758    6044 command_runner.go:130] ! I0328 01:07:46.382389       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0328 01:33:28.476758    6044 command_runner.go:130] ! I0328 01:07:46.382488       1 shared_informer.go:318] Caches are synced for service config
	I0328 01:33:28.476758    6044 command_runner.go:130] ! I0328 01:07:46.371910       1 config.go:97] "Starting endpoint slice config controller"
	I0328 01:33:28.476758    6044 command_runner.go:130] ! I0328 01:07:46.382665       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0328 01:33:28.476758    6044 command_runner.go:130] ! I0328 01:07:46.382693       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0328 01:33:28.476758    6044 command_runner.go:130] ! I0328 01:07:46.374155       1 config.go:315] "Starting node config controller"
	I0328 01:33:28.476758    6044 command_runner.go:130] ! I0328 01:07:46.382861       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0328 01:33:28.476758    6044 command_runner.go:130] ! I0328 01:07:46.382887       1 shared_informer.go:318] Caches are synced for node config
	I0328 01:33:28.478931    6044 logs.go:123] Gathering logs for Docker ...
	I0328 01:33:28.478931    6044 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0328 01:33:28.514708    6044 command_runner.go:130] > Mar 28 01:30:39 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0328 01:33:28.514708    6044 command_runner.go:130] > Mar 28 01:30:39 minikube cri-dockerd[221]: time="2024-03-28T01:30:39Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0328 01:33:28.514792    6044 command_runner.go:130] > Mar 28 01:30:39 minikube cri-dockerd[221]: time="2024-03-28T01:30:39Z" level=info msg="Start docker client with request timeout 0s"
	I0328 01:33:28.514792    6044 command_runner.go:130] > Mar 28 01:30:39 minikube cri-dockerd[221]: time="2024-03-28T01:30:39Z" level=fatal msg="failed to get docker version: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0328 01:33:28.514792    6044 command_runner.go:130] > Mar 28 01:30:39 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0328 01:33:28.514792    6044 command_runner.go:130] > Mar 28 01:30:39 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0328 01:33:28.514792    6044 command_runner.go:130] > Mar 28 01:30:39 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0328 01:33:28.514792    6044 command_runner.go:130] > Mar 28 01:30:42 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 1.
	I0328 01:33:28.514792    6044 command_runner.go:130] > Mar 28 01:30:42 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0328 01:33:28.514792    6044 command_runner.go:130] > Mar 28 01:30:42 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0328 01:33:28.514792    6044 command_runner.go:130] > Mar 28 01:30:42 minikube cri-dockerd[411]: time="2024-03-28T01:30:42Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0328 01:33:28.514792    6044 command_runner.go:130] > Mar 28 01:30:42 minikube cri-dockerd[411]: time="2024-03-28T01:30:42Z" level=info msg="Start docker client with request timeout 0s"
	I0328 01:33:28.514792    6044 command_runner.go:130] > Mar 28 01:30:42 minikube cri-dockerd[411]: time="2024-03-28T01:30:42Z" level=fatal msg="failed to get docker version: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0328 01:33:28.514792    6044 command_runner.go:130] > Mar 28 01:30:42 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0328 01:33:28.514792    6044 command_runner.go:130] > Mar 28 01:30:42 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0328 01:33:28.514792    6044 command_runner.go:130] > Mar 28 01:30:42 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0328 01:33:28.514792    6044 command_runner.go:130] > Mar 28 01:30:44 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 2.
	I0328 01:33:28.514792    6044 command_runner.go:130] > Mar 28 01:30:44 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0328 01:33:28.514792    6044 command_runner.go:130] > Mar 28 01:30:44 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0328 01:33:28.514792    6044 command_runner.go:130] > Mar 28 01:30:44 minikube cri-dockerd[432]: time="2024-03-28T01:30:44Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0328 01:33:28.514792    6044 command_runner.go:130] > Mar 28 01:30:44 minikube cri-dockerd[432]: time="2024-03-28T01:30:44Z" level=info msg="Start docker client with request timeout 0s"
	I0328 01:33:28.514792    6044 command_runner.go:130] > Mar 28 01:30:44 minikube cri-dockerd[432]: time="2024-03-28T01:30:44Z" level=fatal msg="failed to get docker version: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0328 01:33:28.514792    6044 command_runner.go:130] > Mar 28 01:30:44 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0328 01:33:28.514792    6044 command_runner.go:130] > Mar 28 01:30:44 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0328 01:33:28.514792    6044 command_runner.go:130] > Mar 28 01:30:44 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0328 01:33:28.514792    6044 command_runner.go:130] > Mar 28 01:30:46 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 3.
	I0328 01:33:28.514792    6044 command_runner.go:130] > Mar 28 01:30:46 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0328 01:33:28.514792    6044 command_runner.go:130] > Mar 28 01:30:46 minikube systemd[1]: cri-docker.service: Start request repeated too quickly.
	I0328 01:33:28.514792    6044 command_runner.go:130] > Mar 28 01:30:46 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0328 01:33:28.514792    6044 command_runner.go:130] > Mar 28 01:30:46 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0328 01:33:28.514792    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 systemd[1]: Starting Docker Application Container Engine...
	I0328 01:33:28.514792    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[661]: time="2024-03-28T01:31:35.187514586Z" level=info msg="Starting up"
	I0328 01:33:28.514792    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[661]: time="2024-03-28T01:31:35.188793924Z" level=info msg="containerd not running, starting managed containerd"
	I0328 01:33:28.515328    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[661]: time="2024-03-28T01:31:35.190152365Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=667
	I0328 01:33:28.515444    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.231336402Z" level=info msg="starting containerd" revision=dcf2847247e18caba8dce86522029642f60fe96b version=v1.7.14
	I0328 01:33:28.515444    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.261679714Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0328 01:33:28.515501    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.261844319Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0328 01:33:28.515501    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.262043225Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0328 01:33:28.515501    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.262141928Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0328 01:33:28.515501    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.262784947Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0328 01:33:28.515501    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.262879050Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0328 01:33:28.515501    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.263137658Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0328 01:33:28.515501    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.263270562Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0328 01:33:28.515501    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.263294463Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	I0328 01:33:28.515501    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.263307663Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0328 01:33:28.515501    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.263734076Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0328 01:33:28.515501    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.264531200Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0328 01:33:28.515501    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.267908401Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0328 01:33:28.515501    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.268045005Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0328 01:33:28.515501    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.268342414Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0328 01:33:28.515501    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.268438817Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0328 01:33:28.515501    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.269089237Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0328 01:33:28.515501    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.269210440Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	I0328 01:33:28.515501    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.269296343Z" level=info msg="metadata content store policy set" policy=shared
	I0328 01:33:28.515501    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.277331684Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0328 01:33:28.515501    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.277533790Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0328 01:33:28.515501    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.277593492Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0328 01:33:28.515501    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.277648694Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0328 01:33:28.515501    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.277726596Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0328 01:33:28.515501    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.277896701Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0328 01:33:28.516030    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.279273243Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0328 01:33:28.516030    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.279706256Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0328 01:33:28.516077    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.279852560Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0328 01:33:28.516077    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.280041166Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0328 01:33:28.516119    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.280280073Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0328 01:33:28.516119    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.280373676Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0328 01:33:28.516119    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.280594982Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0328 01:33:28.516197    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.280657284Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0328 01:33:28.516197    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.280684285Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0328 01:33:28.516197    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.280713086Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0328 01:33:28.516197    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.280731986Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0328 01:33:28.516275    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.280779288Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0328 01:33:28.516275    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.281122598Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0328 01:33:28.516275    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.281392306Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0328 01:33:28.516356    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.281419307Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0328 01:33:28.516356    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.281475909Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0328 01:33:28.516426    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.281497309Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0328 01:33:28.516451    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.281513210Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0328 01:33:28.516497    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.281527910Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0328 01:33:28.516497    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.281575712Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0328 01:33:28.516561    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.281605113Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0328 01:33:28.516581    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.281624613Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0328 01:33:28.516581    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.281640414Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0328 01:33:28.516581    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.281688915Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0328 01:33:28.516581    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.281906822Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0328 01:33:28.516655    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.282137929Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0328 01:33:28.516655    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.282171230Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0328 01:33:28.516655    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.282426837Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0328 01:33:28.516655    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.282452838Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0328 01:33:28.516655    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.282645244Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0328 01:33:28.516740    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.282848450Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	I0328 01:33:28.516763    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.282869251Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0328 01:33:28.516790    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.282883451Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	I0328 01:33:28.516790    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.282996354Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0328 01:33:28.516790    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.283034556Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0328 01:33:28.516790    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.283048856Z" level=info msg="NRI interface is disabled by configuration."
	I0328 01:33:28.516790    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.283357365Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0328 01:33:28.517922    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.283501170Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0328 01:33:28.517922    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.283575472Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0328 01:33:28.517922    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.283615173Z" level=info msg="containerd successfully booted in 0.056485s"
	I0328 01:33:28.517922    6044 command_runner.go:130] > Mar 28 01:31:36 multinode-240000 dockerd[661]: time="2024-03-28T01:31:36.252048243Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0328 01:33:28.517922    6044 command_runner.go:130] > Mar 28 01:31:36 multinode-240000 dockerd[661]: time="2024-03-28T01:31:36.458814267Z" level=info msg="Loading containers: start."
	I0328 01:33:28.517922    6044 command_runner.go:130] > Mar 28 01:31:36 multinode-240000 dockerd[661]: time="2024-03-28T01:31:36.940030727Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0328 01:33:28.517922    6044 command_runner.go:130] > Mar 28 01:31:37 multinode-240000 dockerd[661]: time="2024-03-28T01:31:37.031415390Z" level=info msg="Loading containers: done."
	I0328 01:33:28.517922    6044 command_runner.go:130] > Mar 28 01:31:37 multinode-240000 dockerd[661]: time="2024-03-28T01:31:37.065830879Z" level=info msg="Docker daemon" commit=8b79278 containerd-snapshotter=false storage-driver=overlay2 version=26.0.0
	I0328 01:33:28.518453    6044 command_runner.go:130] > Mar 28 01:31:37 multinode-240000 dockerd[661]: time="2024-03-28T01:31:37.066918879Z" level=info msg="Daemon has completed initialization"
	I0328 01:33:28.518453    6044 command_runner.go:130] > Mar 28 01:31:37 multinode-240000 dockerd[661]: time="2024-03-28T01:31:37.126063860Z" level=info msg="API listen on /var/run/docker.sock"
	I0328 01:33:28.518495    6044 command_runner.go:130] > Mar 28 01:31:37 multinode-240000 dockerd[661]: time="2024-03-28T01:31:37.126232160Z" level=info msg="API listen on [::]:2376"
	I0328 01:33:28.518495    6044 command_runner.go:130] > Mar 28 01:31:37 multinode-240000 systemd[1]: Started Docker Application Container Engine.
	I0328 01:33:28.518495    6044 command_runner.go:130] > Mar 28 01:32:04 multinode-240000 dockerd[661]: time="2024-03-28T01:32:04.977526069Z" level=info msg="Processing signal 'terminated'"
	I0328 01:33:28.518495    6044 command_runner.go:130] > Mar 28 01:32:04 multinode-240000 dockerd[661]: time="2024-03-28T01:32:04.980026875Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0328 01:33:28.518495    6044 command_runner.go:130] > Mar 28 01:32:04 multinode-240000 systemd[1]: Stopping Docker Application Container Engine...
	I0328 01:33:28.518609    6044 command_runner.go:130] > Mar 28 01:32:04 multinode-240000 dockerd[661]: time="2024-03-28T01:32:04.981008678Z" level=info msg="Daemon shutdown complete"
	I0328 01:33:28.518609    6044 command_runner.go:130] > Mar 28 01:32:04 multinode-240000 dockerd[661]: time="2024-03-28T01:32:04.981100578Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0328 01:33:28.518609    6044 command_runner.go:130] > Mar 28 01:32:04 multinode-240000 dockerd[661]: time="2024-03-28T01:32:04.981126378Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0328 01:33:28.518609    6044 command_runner.go:130] > Mar 28 01:32:05 multinode-240000 systemd[1]: docker.service: Deactivated successfully.
	I0328 01:33:28.518609    6044 command_runner.go:130] > Mar 28 01:32:05 multinode-240000 systemd[1]: Stopped Docker Application Container Engine.
	I0328 01:33:28.518609    6044 command_runner.go:130] > Mar 28 01:32:05 multinode-240000 systemd[1]: Starting Docker Application Container Engine...
	I0328 01:33:28.518609    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1051]: time="2024-03-28T01:32:06.063559195Z" level=info msg="Starting up"
	I0328 01:33:28.518609    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1051]: time="2024-03-28T01:32:06.064631697Z" level=info msg="containerd not running, starting managed containerd"
	I0328 01:33:28.518609    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1051]: time="2024-03-28T01:32:06.065637900Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1057
	I0328 01:33:28.518747    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.100209087Z" level=info msg="starting containerd" revision=dcf2847247e18caba8dce86522029642f60fe96b version=v1.7.14
	I0328 01:33:28.518747    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.130085762Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0328 01:33:28.518747    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.130208062Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0328 01:33:28.518809    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.130256862Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0328 01:33:28.518861    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.130275562Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0328 01:33:28.518861    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.130311762Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0328 01:33:28.518861    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.130326962Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0328 01:33:28.518861    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.130572163Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0328 01:33:28.518861    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.130673463Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0328 01:33:28.518861    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.130696363Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	I0328 01:33:28.518861    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.130764663Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0328 01:33:28.518861    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.130798363Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0328 01:33:28.518861    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.130926864Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0328 01:33:28.518861    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.134236672Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0328 01:33:28.518861    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.134361772Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0328 01:33:28.518861    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.134599073Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0328 01:33:28.518861    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.134797173Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0328 01:33:28.518861    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.135068574Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0328 01:33:28.518861    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.135093174Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	I0328 01:33:28.518861    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.135148374Z" level=info msg="metadata content store policy set" policy=shared
	I0328 01:33:28.518861    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.135673176Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0328 01:33:28.518861    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.135920276Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0328 01:33:28.518861    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.135946676Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0328 01:33:28.518861    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.135980176Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0328 01:33:28.518861    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.135997376Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0328 01:33:28.518861    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.136050377Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0328 01:33:28.518861    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.136660078Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0328 01:33:28.518861    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.136812179Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0328 01:33:28.518861    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.136923379Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0328 01:33:28.518861    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.136946979Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0328 01:33:28.519410    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.136964679Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0328 01:33:28.519458    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.136991479Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0328 01:33:28.519458    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.137010579Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0328 01:33:28.519515    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.137027279Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0328 01:33:28.519515    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.137099479Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0328 01:33:28.519515    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.137235380Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0328 01:33:28.519515    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.137265080Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0328 01:33:28.519592    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.137281180Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0328 01:33:28.519592    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.137304080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0328 01:33:28.519592    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.137320180Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0328 01:33:28.519694    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.137338080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0328 01:33:28.519694    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.137353080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0328 01:33:28.519768    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.137374080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0328 01:33:28.519768    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.137389280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0328 01:33:28.519768    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.137427380Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0328 01:33:28.519829    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.137553380Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0328 01:33:28.519829    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.137633981Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0328 01:33:28.519829    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.137657481Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0328 01:33:28.519889    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.137672181Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0328 01:33:28.519889    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.137686281Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0328 01:33:28.519945    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.137700481Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0328 01:33:28.519945    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.137771381Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0328 01:33:28.519945    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.137797181Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0328 01:33:28.519945    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.137811481Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0328 01:33:28.520006    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.137826081Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0328 01:33:28.520006    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.137953481Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0328 01:33:28.520006    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.137975581Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	I0328 01:33:28.520062    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.137988781Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0328 01:33:28.520062    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.138001082Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	I0328 01:33:28.520120    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.138075582Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0328 01:33:28.520120    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.138191982Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0328 01:33:28.520120    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.138211082Z" level=info msg="NRI interface is disabled by configuration."
	I0328 01:33:28.520177    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.138597783Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0328 01:33:28.520234    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.138694583Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0328 01:33:28.520289    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.138839884Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0328 01:33:28.520344    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.138866684Z" level=info msg="containerd successfully booted in 0.040774s"
	I0328 01:33:28.520344    6044 command_runner.go:130] > Mar 28 01:32:07 multinode-240000 dockerd[1051]: time="2024-03-28T01:32:07.114634333Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0328 01:33:28.520403    6044 command_runner.go:130] > Mar 28 01:32:07 multinode-240000 dockerd[1051]: time="2024-03-28T01:32:07.151787026Z" level=info msg="Loading containers: start."
	I0328 01:33:28.520403    6044 command_runner.go:130] > Mar 28 01:32:07 multinode-240000 dockerd[1051]: time="2024-03-28T01:32:07.470888727Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0328 01:33:28.520403    6044 command_runner.go:130] > Mar 28 01:32:07 multinode-240000 dockerd[1051]: time="2024-03-28T01:32:07.559958251Z" level=info msg="Loading containers: done."
	I0328 01:33:28.520466    6044 command_runner.go:130] > Mar 28 01:32:07 multinode-240000 dockerd[1051]: time="2024-03-28T01:32:07.589960526Z" level=info msg="Docker daemon" commit=8b79278 containerd-snapshotter=false storage-driver=overlay2 version=26.0.0
	I0328 01:33:28.520486    6044 command_runner.go:130] > Mar 28 01:32:07 multinode-240000 dockerd[1051]: time="2024-03-28T01:32:07.590109426Z" level=info msg="Daemon has completed initialization"
	I0328 01:33:28.520486    6044 command_runner.go:130] > Mar 28 01:32:07 multinode-240000 dockerd[1051]: time="2024-03-28T01:32:07.638170147Z" level=info msg="API listen on /var/run/docker.sock"
	I0328 01:33:28.520486    6044 command_runner.go:130] > Mar 28 01:32:07 multinode-240000 systemd[1]: Started Docker Application Container Engine.
	I0328 01:33:28.520542    6044 command_runner.go:130] > Mar 28 01:32:07 multinode-240000 dockerd[1051]: time="2024-03-28T01:32:07.638290047Z" level=info msg="API listen on [::]:2376"
	I0328 01:33:28.520542    6044 command_runner.go:130] > Mar 28 01:32:08 multinode-240000 systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0328 01:33:28.520542    6044 command_runner.go:130] > Mar 28 01:32:08 multinode-240000 cri-dockerd[1277]: time="2024-03-28T01:32:08Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0328 01:33:28.520598    6044 command_runner.go:130] > Mar 28 01:32:08 multinode-240000 cri-dockerd[1277]: time="2024-03-28T01:32:08Z" level=info msg="Start docker client with request timeout 0s"
	I0328 01:33:28.520598    6044 command_runner.go:130] > Mar 28 01:32:08 multinode-240000 cri-dockerd[1277]: time="2024-03-28T01:32:08Z" level=info msg="Hairpin mode is set to hairpin-veth"
	I0328 01:33:28.520598    6044 command_runner.go:130] > Mar 28 01:32:08 multinode-240000 cri-dockerd[1277]: time="2024-03-28T01:32:08Z" level=info msg="Loaded network plugin cni"
	I0328 01:33:28.520654    6044 command_runner.go:130] > Mar 28 01:32:08 multinode-240000 cri-dockerd[1277]: time="2024-03-28T01:32:08Z" level=info msg="Docker cri networking managed by network plugin cni"
	I0328 01:33:28.520782    6044 command_runner.go:130] > Mar 28 01:32:08 multinode-240000 cri-dockerd[1277]: time="2024-03-28T01:32:08Z" level=info msg="Docker Info: &{ID:c06283fc-1f43-4b26-80be-81922335c5fe Containers:18 ContainersRunning:0 ContainersPaused:0 ContainersStopped:18 Images:10 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:[] Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:[] Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6tables:true Debug:false NFd:27 OomKillDisable:false NGoroutines:49 SystemTime:2024-03-28T01:32:08.776685604Z LoggingDriver:json-file CgroupDriver:cgroupfs CgroupVersion:2 NEventsListener:0 KernelVersion:5.10.207 OperatingSystem:Buildroot 2023.02.9 OSVersion:2023.02.9 OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:0xc0002cf3b0 NCPU:2 MemTotal:2216206336 GenericResources:[] DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:multinode-240000 Labels:[provider=hyperv] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:map[io.containerd.runc.v2:{Path:runc Args:[] Shim:<nil>} runc:{Path:runc Args:[] Shim:<nil>}] DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:[] Nodes:0 Managers:0 Cluster:<nil> Warnings:[]} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dcf2847247e18caba8dce86522029642f60fe96b Expected:dcf2847247e18caba8dce86522029642f60fe96b} RuncCommit:{ID:51d5e94601ceffbbd85688df1c928ecccbfa4685 Expected:51d5e94601ceffbbd85688df1c928ecccbfa4685} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=builtin name=cgroupns] ProductLicense:Community Engine DefaultAddressPools:[] Warnings:[]}"
	I0328 01:33:28.520825    6044 command_runner.go:130] > Mar 28 01:32:08 multinode-240000 cri-dockerd[1277]: time="2024-03-28T01:32:08Z" level=info msg="Setting cgroupDriver cgroupfs"
	I0328 01:33:28.520825    6044 command_runner.go:130] > Mar 28 01:32:08 multinode-240000 cri-dockerd[1277]: time="2024-03-28T01:32:08Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	I0328 01:33:28.520884    6044 command_runner.go:130] > Mar 28 01:32:08 multinode-240000 cri-dockerd[1277]: time="2024-03-28T01:32:08Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	I0328 01:33:28.520908    6044 command_runner.go:130] > Mar 28 01:32:08 multinode-240000 cri-dockerd[1277]: time="2024-03-28T01:32:08Z" level=info msg="Start cri-dockerd grpc backend"
	I0328 01:33:28.520908    6044 command_runner.go:130] > Mar 28 01:32:08 multinode-240000 systemd[1]: Started CRI Interface for Docker Application Container Engine.
	I0328 01:33:28.521023    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 cri-dockerd[1277]: time="2024-03-28T01:32:13Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"busybox-7fdf7869d9-ct428_default\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"930fbfde452c0b2b3f13a6751fc648a70e87137f38175cb6dd161b40193b9a79\""
	I0328 01:33:28.521050    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 cri-dockerd[1277]: time="2024-03-28T01:32:13Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-76f75df574-776ph_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"6b6f67390b0701700963eec28e4c4cc4aa0e852e4ec0f2392f0f6f5d9bdad52a\""
	I0328 01:33:28.521050    6044 command_runner.go:130] > Mar 28 01:32:14 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:14.605075633Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0328 01:33:28.521050    6044 command_runner.go:130] > Mar 28 01:32:14 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:14.605218534Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0328 01:33:28.521050    6044 command_runner.go:130] > Mar 28 01:32:14 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:14.605234734Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0328 01:33:28.521050    6044 command_runner.go:130] > Mar 28 01:32:14 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:14.606038436Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0328 01:33:28.521050    6044 command_runner.go:130] > Mar 28 01:32:14 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:14.748289893Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0328 01:33:28.521050    6044 command_runner.go:130] > Mar 28 01:32:14 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:14.748491293Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0328 01:33:28.521050    6044 command_runner.go:130] > Mar 28 01:32:14 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:14.748521793Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0328 01:33:28.521050    6044 command_runner.go:130] > Mar 28 01:32:14 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:14.748642993Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0328 01:33:28.521050    6044 command_runner.go:130] > Mar 28 01:32:14 multinode-240000 cri-dockerd[1277]: time="2024-03-28T01:32:14Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/3314134e34d83c71815af773bff505973dcb9797421f75a59b98862dc8bc69bf/resolv.conf as [nameserver 172.28.224.1]"
	I0328 01:33:28.521050    6044 command_runner.go:130] > Mar 28 01:32:14 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:14.844158033Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0328 01:33:28.521050    6044 command_runner.go:130] > Mar 28 01:32:14 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:14.844387234Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0328 01:33:28.521050    6044 command_runner.go:130] > Mar 28 01:32:14 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:14.844509634Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0328 01:33:28.521050    6044 command_runner.go:130] > Mar 28 01:32:14 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:14.844924435Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0328 01:33:28.521050    6044 command_runner.go:130] > Mar 28 01:32:14 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:14.862145778Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0328 01:33:28.521050    6044 command_runner.go:130] > Mar 28 01:32:14 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:14.862239979Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0328 01:33:28.521050    6044 command_runner.go:130] > Mar 28 01:32:14 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:14.862251979Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0328 01:33:28.521570    6044 command_runner.go:130] > Mar 28 01:32:14 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:14.862457779Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0328 01:33:28.521623    6044 command_runner.go:130] > Mar 28 01:32:14 multinode-240000 cri-dockerd[1277]: time="2024-03-28T01:32:14Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/8cf9dbbfda9ea6f2b61a134374c1f92196fe22bde8e166de86c62d863a2fbdb9/resolv.conf as [nameserver 172.28.224.1]"
	I0328 01:33:28.521623    6044 command_runner.go:130] > Mar 28 01:32:15 multinode-240000 cri-dockerd[1277]: time="2024-03-28T01:32:15Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/8780a18ab975521e6b1b20e4b7cffe786927f03654dd858b9d179f1d73d13d81/resolv.conf as [nameserver 172.28.224.1]"
	I0328 01:33:28.521623    6044 command_runner.go:130] > Mar 28 01:32:15 multinode-240000 cri-dockerd[1277]: time="2024-03-28T01:32:15Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/4dd7c4652074475872599900ce854e48425a373dfa665073bd9bfb56fa5330c0/resolv.conf as [nameserver 172.28.224.1]"
	I0328 01:33:28.521623    6044 command_runner.go:130] > Mar 28 01:32:15 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:15.196398617Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0328 01:33:28.521719    6044 command_runner.go:130] > Mar 28 01:32:15 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:15.196541018Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0328 01:33:28.521719    6044 command_runner.go:130] > Mar 28 01:32:15 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:15.196606818Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0328 01:33:28.521719    6044 command_runner.go:130] > Mar 28 01:32:15 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:15.199212424Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0328 01:33:28.521797    6044 command_runner.go:130] > Mar 28 01:32:15 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:15.279595426Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0328 01:33:28.521797    6044 command_runner.go:130] > Mar 28 01:32:15 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:15.279693326Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0328 01:33:28.521797    6044 command_runner.go:130] > Mar 28 01:32:15 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:15.279767327Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0328 01:33:28.521880    6044 command_runner.go:130] > Mar 28 01:32:15 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:15.280052327Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0328 01:33:28.521909    6044 command_runner.go:130] > Mar 28 01:32:15 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:15.393428912Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0328 01:33:28.521909    6044 command_runner.go:130] > Mar 28 01:32:15 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:15.393536412Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0328 01:33:28.521909    6044 command_runner.go:130] > Mar 28 01:32:15 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:15.393553112Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0328 01:33:28.521909    6044 command_runner.go:130] > Mar 28 01:32:15 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:15.393951413Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0328 01:33:28.521909    6044 command_runner.go:130] > Mar 28 01:32:15 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:15.409559852Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0328 01:33:28.521909    6044 command_runner.go:130] > Mar 28 01:32:15 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:15.409616852Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0328 01:33:28.521909    6044 command_runner.go:130] > Mar 28 01:32:15 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:15.409628953Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0328 01:33:28.521909    6044 command_runner.go:130] > Mar 28 01:32:15 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:15.410047254Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0328 01:33:28.521909    6044 command_runner.go:130] > Mar 28 01:32:19 multinode-240000 cri-dockerd[1277]: time="2024-03-28T01:32:19Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
	I0328 01:33:28.521909    6044 command_runner.go:130] > Mar 28 01:32:20 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:20.444492990Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0328 01:33:28.521909    6044 command_runner.go:130] > Mar 28 01:32:20 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:20.445565592Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0328 01:33:28.521909    6044 command_runner.go:130] > Mar 28 01:32:20 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:20.461244632Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0328 01:33:28.521909    6044 command_runner.go:130] > Mar 28 01:32:20 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:20.465433642Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0328 01:33:28.521909    6044 command_runner.go:130] > Mar 28 01:32:20 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:20.501034531Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0328 01:33:28.521909    6044 command_runner.go:130] > Mar 28 01:32:20 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:20.501100632Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0328 01:33:28.521909    6044 command_runner.go:130] > Mar 28 01:32:20 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:20.501129332Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0328 01:33:28.521909    6044 command_runner.go:130] > Mar 28 01:32:20 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:20.501289432Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0328 01:33:28.521909    6044 command_runner.go:130] > Mar 28 01:32:20 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:20.552329460Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0328 01:33:28.521909    6044 command_runner.go:130] > Mar 28 01:32:20 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:20.552525461Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0328 01:33:28.521909    6044 command_runner.go:130] > Mar 28 01:32:20 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:20.552550661Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0328 01:33:28.521909    6044 command_runner.go:130] > Mar 28 01:32:20 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:20.553090962Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0328 01:33:28.521909    6044 command_runner.go:130] > Mar 28 01:32:20 multinode-240000 cri-dockerd[1277]: time="2024-03-28T01:32:20Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/dfd01cb54b7d89aef97b057d7578bb34d4f58b0e2c9aacddeeff9fbb19db3cb6/resolv.conf as [nameserver 172.28.224.1]"
	I0328 01:33:28.521909    6044 command_runner.go:130] > Mar 28 01:32:20 multinode-240000 cri-dockerd[1277]: time="2024-03-28T01:32:20Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/821d3cf9ae1a9ffce2f350e9ee239e00fd8743eb338fae8a5b39734fc9cabf5e/resolv.conf as [nameserver 172.28.224.1]"
	I0328 01:33:28.521909    6044 command_runner.go:130] > Mar 28 01:32:21 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:21.129523609Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0328 01:33:28.522445    6044 command_runner.go:130] > Mar 28 01:32:21 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:21.129601909Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0328 01:33:28.522445    6044 command_runner.go:130] > Mar 28 01:32:21 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:21.129619209Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0328 01:33:28.522564    6044 command_runner.go:130] > Mar 28 01:32:21 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:21.129777210Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0328 01:33:28.522564    6044 command_runner.go:130] > Mar 28 01:32:21 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:21.142530242Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0328 01:33:28.522564    6044 command_runner.go:130] > Mar 28 01:32:21 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:21.142656442Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0328 01:33:28.522564    6044 command_runner.go:130] > Mar 28 01:32:21 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:21.142692242Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0328 01:33:28.522564    6044 command_runner.go:130] > Mar 28 01:32:21 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:21.143468544Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0328 01:33:28.522564    6044 command_runner.go:130] > Mar 28 01:32:21 multinode-240000 cri-dockerd[1277]: time="2024-03-28T01:32:21Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/347f7ad7ebaed8796c8b12cf936e661c605c1c7a9dc02ccb15b4c682a96c1058/resolv.conf as [nameserver 172.28.224.1]"
	I0328 01:33:28.522564    6044 command_runner.go:130] > Mar 28 01:32:21 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:21.510503865Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0328 01:33:28.522564    6044 command_runner.go:130] > Mar 28 01:32:21 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:21.512149169Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0328 01:33:28.522564    6044 command_runner.go:130] > Mar 28 01:32:21 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:21.515162977Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0328 01:33:28.522564    6044 command_runner.go:130] > Mar 28 01:32:21 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:21.515941979Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0328 01:33:28.522564    6044 command_runner.go:130] > Mar 28 01:32:51 multinode-240000 dockerd[1051]: time="2024-03-28T01:32:51.802252517Z" level=info msg="ignoring event" container=4dcf03394ea80c2d0427ea263be7c533ab3aaf86ba89e254db95cbdd1b70a343 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0328 01:33:28.522564    6044 command_runner.go:130] > Mar 28 01:32:51 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:51.804266497Z" level=info msg="shim disconnected" id=4dcf03394ea80c2d0427ea263be7c533ab3aaf86ba89e254db95cbdd1b70a343 namespace=moby
	I0328 01:33:28.522564    6044 command_runner.go:130] > Mar 28 01:32:51 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:51.805357585Z" level=warning msg="cleaning up after shim disconnected" id=4dcf03394ea80c2d0427ea263be7c533ab3aaf86ba89e254db95cbdd1b70a343 namespace=moby
	I0328 01:33:28.522564    6044 command_runner.go:130] > Mar 28 01:32:51 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:51.805496484Z" level=info msg="cleaning up dead shim" namespace=moby
	I0328 01:33:28.522564    6044 command_runner.go:130] > Mar 28 01:33:05 multinode-240000 dockerd[1057]: time="2024-03-28T01:33:05.040212718Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0328 01:33:28.522564    6044 command_runner.go:130] > Mar 28 01:33:05 multinode-240000 dockerd[1057]: time="2024-03-28T01:33:05.040328718Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0328 01:33:28.522564    6044 command_runner.go:130] > Mar 28 01:33:05 multinode-240000 dockerd[1057]: time="2024-03-28T01:33:05.041880913Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0328 01:33:28.522564    6044 command_runner.go:130] > Mar 28 01:33:05 multinode-240000 dockerd[1057]: time="2024-03-28T01:33:05.044028408Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0328 01:33:28.522564    6044 command_runner.go:130] > Mar 28 01:33:24 multinode-240000 dockerd[1057]: time="2024-03-28T01:33:24.067078014Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0328 01:33:28.522564    6044 command_runner.go:130] > Mar 28 01:33:24 multinode-240000 dockerd[1057]: time="2024-03-28T01:33:24.067134214Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0328 01:33:28.522564    6044 command_runner.go:130] > Mar 28 01:33:24 multinode-240000 dockerd[1057]: time="2024-03-28T01:33:24.067145514Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0328 01:33:28.523091    6044 command_runner.go:130] > Mar 28 01:33:24 multinode-240000 dockerd[1057]: time="2024-03-28T01:33:24.067230414Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0328 01:33:28.523091    6044 command_runner.go:130] > Mar 28 01:33:24 multinode-240000 dockerd[1057]: time="2024-03-28T01:33:24.074234221Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0328 01:33:28.523091    6044 command_runner.go:130] > Mar 28 01:33:24 multinode-240000 dockerd[1057]: time="2024-03-28T01:33:24.074356921Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0328 01:33:28.523091    6044 command_runner.go:130] > Mar 28 01:33:24 multinode-240000 dockerd[1057]: time="2024-03-28T01:33:24.074428021Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0328 01:33:28.523207    6044 command_runner.go:130] > Mar 28 01:33:24 multinode-240000 dockerd[1057]: time="2024-03-28T01:33:24.074678322Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0328 01:33:28.523304    6044 command_runner.go:130] > Mar 28 01:33:24 multinode-240000 cri-dockerd[1277]: time="2024-03-28T01:33:24Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/d3a9caca4652153f4a871cbd85e3780df506a9ae46da758a86025933fbaed683/resolv.conf as [nameserver 172.28.224.1]"
	I0328 01:33:28.523418    6044 command_runner.go:130] > Mar 28 01:33:24 multinode-240000 cri-dockerd[1277]: time="2024-03-28T01:33:24Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/57a41fbc578d50e83f1d23eab9fdc7d77f76594eb2d17300827b52b00008af13/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	I0328 01:33:28.523484    6044 command_runner.go:130] > Mar 28 01:33:24 multinode-240000 dockerd[1057]: time="2024-03-28T01:33:24.642121747Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0328 01:33:28.523484    6044 command_runner.go:130] > Mar 28 01:33:24 multinode-240000 dockerd[1057]: time="2024-03-28T01:33:24.644702250Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0328 01:33:28.523484    6044 command_runner.go:130] > Mar 28 01:33:24 multinode-240000 dockerd[1057]: time="2024-03-28T01:33:24.644921750Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0328 01:33:28.523484    6044 command_runner.go:130] > Mar 28 01:33:24 multinode-240000 dockerd[1057]: time="2024-03-28T01:33:24.645074450Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0328 01:33:28.523484    6044 command_runner.go:130] > Mar 28 01:33:24 multinode-240000 dockerd[1057]: time="2024-03-28T01:33:24.675693486Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0328 01:33:28.523484    6044 command_runner.go:130] > Mar 28 01:33:24 multinode-240000 dockerd[1057]: time="2024-03-28T01:33:24.675868286Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0328 01:33:28.523484    6044 command_runner.go:130] > Mar 28 01:33:24 multinode-240000 dockerd[1057]: time="2024-03-28T01:33:24.675939787Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0328 01:33:28.523484    6044 command_runner.go:130] > Mar 28 01:33:24 multinode-240000 dockerd[1057]: time="2024-03-28T01:33:24.676054087Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0328 01:33:28.523484    6044 command_runner.go:130] > Mar 28 01:33:27 multinode-240000 dockerd[1051]: 2024/03/28 01:33:27 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0328 01:33:28.523484    6044 command_runner.go:130] > Mar 28 01:33:27 multinode-240000 dockerd[1051]: 2024/03/28 01:33:27 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0328 01:33:28.523484    6044 command_runner.go:130] > Mar 28 01:33:27 multinode-240000 dockerd[1051]: 2024/03/28 01:33:27 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0328 01:33:28.523484    6044 command_runner.go:130] > Mar 28 01:33:27 multinode-240000 dockerd[1051]: 2024/03/28 01:33:27 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0328 01:33:28.523484    6044 command_runner.go:130] > Mar 28 01:33:27 multinode-240000 dockerd[1051]: 2024/03/28 01:33:27 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0328 01:33:28.523484    6044 command_runner.go:130] > Mar 28 01:33:28 multinode-240000 dockerd[1051]: 2024/03/28 01:33:28 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0328 01:33:28.523484    6044 command_runner.go:130] > Mar 28 01:33:28 multinode-240000 dockerd[1051]: 2024/03/28 01:33:28 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0328 01:33:28.523484    6044 command_runner.go:130] > Mar 28 01:33:28 multinode-240000 dockerd[1051]: 2024/03/28 01:33:28 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0328 01:33:28.523484    6044 command_runner.go:130] > Mar 28 01:33:28 multinode-240000 dockerd[1051]: 2024/03/28 01:33:28 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0328 01:33:28.523484    6044 command_runner.go:130] > Mar 28 01:33:28 multinode-240000 dockerd[1051]: 2024/03/28 01:33:28 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0328 01:33:28.523484    6044 command_runner.go:130] > Mar 28 01:33:28 multinode-240000 dockerd[1051]: 2024/03/28 01:33:28 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0328 01:33:28.523484    6044 command_runner.go:130] > Mar 28 01:33:28 multinode-240000 dockerd[1051]: 2024/03/28 01:33:28 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0328 01:33:31.076098    6044 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:33:31.104780    6044 command_runner.go:130] > 2032
	I0328 01:33:31.104860    6044 api_server.go:72] duration metric: took 1m6.101039s to wait for apiserver process to appear ...
	I0328 01:33:31.104924    6044 api_server.go:88] waiting for apiserver healthz status ...
	I0328 01:33:31.116927    6044 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0328 01:33:31.147305    6044 command_runner.go:130] > 6539c85e1b61
	I0328 01:33:31.147829    6044 logs.go:276] 1 containers: [6539c85e1b61]
	I0328 01:33:31.158823    6044 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0328 01:33:31.192779    6044 command_runner.go:130] > ab4a76ecb029
	I0328 01:33:31.192779    6044 logs.go:276] 1 containers: [ab4a76ecb029]
	I0328 01:33:31.201778    6044 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0328 01:33:31.228900    6044 command_runner.go:130] > e6a5a75ec447
	I0328 01:33:31.228900    6044 command_runner.go:130] > 29e516c918ef
	I0328 01:33:31.229950    6044 logs.go:276] 2 containers: [e6a5a75ec447 29e516c918ef]
	I0328 01:33:31.239808    6044 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0328 01:33:31.275805    6044 command_runner.go:130] > bc83a37dbd03
	I0328 01:33:31.275805    6044 command_runner.go:130] > 7061eab02790
	I0328 01:33:31.275904    6044 logs.go:276] 2 containers: [bc83a37dbd03 7061eab02790]
	I0328 01:33:31.285038    6044 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0328 01:33:31.312354    6044 command_runner.go:130] > 7c9638784c60
	I0328 01:33:31.312354    6044 command_runner.go:130] > bb0b3c542264
	I0328 01:33:31.312456    6044 logs.go:276] 2 containers: [7c9638784c60 bb0b3c542264]
	I0328 01:33:31.322705    6044 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0328 01:33:31.349305    6044 command_runner.go:130] > ceaccf323dee
	I0328 01:33:31.349305    6044 command_runner.go:130] > 1aa05268773e
	I0328 01:33:31.349305    6044 logs.go:276] 2 containers: [ceaccf323dee 1aa05268773e]
	I0328 01:33:31.358926    6044 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0328 01:33:31.386018    6044 command_runner.go:130] > ee99098e42fc
	I0328 01:33:31.386081    6044 command_runner.go:130] > dc9808261b21
	I0328 01:33:31.386081    6044 logs.go:276] 2 containers: [ee99098e42fc dc9808261b21]
	I0328 01:33:31.386143    6044 logs.go:123] Gathering logs for kube-proxy [bb0b3c542264] ...
	I0328 01:33:31.386143    6044 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb0b3c542264"
	I0328 01:33:31.416544    6044 command_runner.go:130] ! I0328 01:07:46.260052       1 server_others.go:72] "Using iptables proxy"
	I0328 01:33:31.416888    6044 command_runner.go:130] ! I0328 01:07:46.279785       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["172.28.227.122"]
	I0328 01:33:31.416957    6044 command_runner.go:130] ! I0328 01:07:46.364307       1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
	I0328 01:33:31.416957    6044 command_runner.go:130] ! I0328 01:07:46.364414       1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0328 01:33:31.416957    6044 command_runner.go:130] ! I0328 01:07:46.364433       1 server_others.go:168] "Using iptables Proxier"
	I0328 01:33:31.417018    6044 command_runner.go:130] ! I0328 01:07:46.368524       1 proxier.go:245] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0328 01:33:31.417044    6044 command_runner.go:130] ! I0328 01:07:46.368854       1 server.go:865] "Version info" version="v1.29.3"
	I0328 01:33:31.417044    6044 command_runner.go:130] ! I0328 01:07:46.368909       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0328 01:33:31.417044    6044 command_runner.go:130] ! I0328 01:07:46.370904       1 config.go:188] "Starting service config controller"
	I0328 01:33:31.417044    6044 command_runner.go:130] ! I0328 01:07:46.382389       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0328 01:33:31.417116    6044 command_runner.go:130] ! I0328 01:07:46.382488       1 shared_informer.go:318] Caches are synced for service config
	I0328 01:33:31.417116    6044 command_runner.go:130] ! I0328 01:07:46.371910       1 config.go:97] "Starting endpoint slice config controller"
	I0328 01:33:31.417116    6044 command_runner.go:130] ! I0328 01:07:46.382665       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0328 01:33:31.417116    6044 command_runner.go:130] ! I0328 01:07:46.382693       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0328 01:33:31.417176    6044 command_runner.go:130] ! I0328 01:07:46.374155       1 config.go:315] "Starting node config controller"
	I0328 01:33:31.417176    6044 command_runner.go:130] ! I0328 01:07:46.382861       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0328 01:33:31.417176    6044 command_runner.go:130] ! I0328 01:07:46.382887       1 shared_informer.go:318] Caches are synced for node config
	I0328 01:33:31.419910    6044 logs.go:123] Gathering logs for kindnet [ee99098e42fc] ...
	I0328 01:33:31.419910    6044 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee99098e42fc"
	I0328 01:33:31.448577    6044 command_runner.go:130] ! I0328 01:32:22.319753       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0328 01:33:31.448577    6044 command_runner.go:130] ! I0328 01:32:22.320254       1 main.go:107] hostIP = 172.28.229.19
	I0328 01:33:31.448577    6044 command_runner.go:130] ! podIP = 172.28.229.19
	I0328 01:33:31.448577    6044 command_runner.go:130] ! I0328 01:32:22.321740       1 main.go:116] setting mtu 1500 for CNI 
	I0328 01:33:31.449550    6044 command_runner.go:130] ! I0328 01:32:22.321777       1 main.go:146] kindnetd IP family: "ipv4"
	I0328 01:33:31.449550    6044 command_runner.go:130] ! I0328 01:32:22.321799       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0328 01:33:31.449550    6044 command_runner.go:130] ! I0328 01:32:52.738929       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: i/o timeout
	I0328 01:33:31.449550    6044 command_runner.go:130] ! I0328 01:32:52.794200       1 main.go:223] Handling node with IPs: map[172.28.229.19:{}]
	I0328 01:33:31.449550    6044 command_runner.go:130] ! I0328 01:32:52.794320       1 main.go:227] handling current node
	I0328 01:33:31.449550    6044 command_runner.go:130] ! I0328 01:32:52.794662       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:31.449550    6044 command_runner.go:130] ! I0328 01:32:52.794805       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:31.449550    6044 command_runner.go:130] ! I0328 01:32:52.794957       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 172.28.230.250 Flags: [] Table: 0} 
	I0328 01:33:31.449550    6044 command_runner.go:130] ! I0328 01:32:52.795458       1 main.go:223] Handling node with IPs: map[172.28.224.172:{}]
	I0328 01:33:31.449550    6044 command_runner.go:130] ! I0328 01:32:52.795540       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.3.0/24] 
	I0328 01:33:31.449550    6044 command_runner.go:130] ! I0328 01:32:52.795606       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.3.0/24 Src: <nil> Gw: 172.28.224.172 Flags: [] Table: 0} 
	I0328 01:33:31.449550    6044 command_runner.go:130] ! I0328 01:33:02.803479       1 main.go:223] Handling node with IPs: map[172.28.229.19:{}]
	I0328 01:33:31.449550    6044 command_runner.go:130] ! I0328 01:33:02.803569       1 main.go:227] handling current node
	I0328 01:33:31.449550    6044 command_runner.go:130] ! I0328 01:33:02.803584       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:31.449550    6044 command_runner.go:130] ! I0328 01:33:02.803592       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:31.449550    6044 command_runner.go:130] ! I0328 01:33:02.803771       1 main.go:223] Handling node with IPs: map[172.28.224.172:{}]
	I0328 01:33:31.449550    6044 command_runner.go:130] ! I0328 01:33:02.803938       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.3.0/24] 
	I0328 01:33:31.449550    6044 command_runner.go:130] ! I0328 01:33:12.813148       1 main.go:223] Handling node with IPs: map[172.28.229.19:{}]
	I0328 01:33:31.449550    6044 command_runner.go:130] ! I0328 01:33:12.813258       1 main.go:227] handling current node
	I0328 01:33:31.449550    6044 command_runner.go:130] ! I0328 01:33:12.813273       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:31.449550    6044 command_runner.go:130] ! I0328 01:33:12.813281       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:31.449550    6044 command_runner.go:130] ! I0328 01:33:12.813393       1 main.go:223] Handling node with IPs: map[172.28.224.172:{}]
	I0328 01:33:31.449550    6044 command_runner.go:130] ! I0328 01:33:12.813441       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.3.0/24] 
	I0328 01:33:31.449550    6044 command_runner.go:130] ! I0328 01:33:22.829358       1 main.go:223] Handling node with IPs: map[172.28.229.19:{}]
	I0328 01:33:31.449550    6044 command_runner.go:130] ! I0328 01:33:22.829449       1 main.go:227] handling current node
	I0328 01:33:31.449550    6044 command_runner.go:130] ! I0328 01:33:22.829466       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:31.449550    6044 command_runner.go:130] ! I0328 01:33:22.829475       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:31.449550    6044 command_runner.go:130] ! I0328 01:33:22.829915       1 main.go:223] Handling node with IPs: map[172.28.224.172:{}]
	I0328 01:33:31.449550    6044 command_runner.go:130] ! I0328 01:33:22.829982       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.3.0/24] 
	I0328 01:33:31.452546    6044 logs.go:123] Gathering logs for kubelet ...
	I0328 01:33:31.452546    6044 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:33:31.485860    6044 command_runner.go:130] > Mar 28 01:32:09 multinode-240000 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0328 01:33:31.485860    6044 command_runner.go:130] > Mar 28 01:32:10 multinode-240000 kubelet[1398]: I0328 01:32:10.127138    1398 server.go:487] "Kubelet version" kubeletVersion="v1.29.3"
	I0328 01:33:31.485860    6044 command_runner.go:130] > Mar 28 01:32:10 multinode-240000 kubelet[1398]: I0328 01:32:10.127495    1398 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0328 01:33:31.485860    6044 command_runner.go:130] > Mar 28 01:32:10 multinode-240000 kubelet[1398]: I0328 01:32:10.127845    1398 server.go:919] "Client rotation is on, will bootstrap in background"
	I0328 01:33:31.485860    6044 command_runner.go:130] > Mar 28 01:32:10 multinode-240000 kubelet[1398]: E0328 01:32:10.128279    1398 run.go:74] "command failed" err="failed to run Kubelet: unable to load bootstrap kubeconfig: stat /etc/kubernetes/bootstrap-kubelet.conf: no such file or directory"
	I0328 01:33:31.485860    6044 command_runner.go:130] > Mar 28 01:32:10 multinode-240000 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	I0328 01:33:31.486714    6044 command_runner.go:130] > Mar 28 01:32:10 multinode-240000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	I0328 01:33:31.486714    6044 command_runner.go:130] > Mar 28 01:32:10 multinode-240000 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
	I0328 01:33:31.486714    6044 command_runner.go:130] > Mar 28 01:32:10 multinode-240000 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	I0328 01:33:31.486714    6044 command_runner.go:130] > Mar 28 01:32:10 multinode-240000 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0328 01:33:31.486714    6044 command_runner.go:130] > Mar 28 01:32:10 multinode-240000 kubelet[1450]: I0328 01:32:10.911342    1450 server.go:487] "Kubelet version" kubeletVersion="v1.29.3"
	I0328 01:33:31.486714    6044 command_runner.go:130] > Mar 28 01:32:10 multinode-240000 kubelet[1450]: I0328 01:32:10.911442    1450 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0328 01:33:31.486714    6044 command_runner.go:130] > Mar 28 01:32:10 multinode-240000 kubelet[1450]: I0328 01:32:10.911822    1450 server.go:919] "Client rotation is on, will bootstrap in background"
	I0328 01:33:31.486714    6044 command_runner.go:130] > Mar 28 01:32:10 multinode-240000 kubelet[1450]: E0328 01:32:10.911883    1450 run.go:74] "command failed" err="failed to run Kubelet: unable to load bootstrap kubeconfig: stat /etc/kubernetes/bootstrap-kubelet.conf: no such file or directory"
	I0328 01:33:31.486714    6044 command_runner.go:130] > Mar 28 01:32:10 multinode-240000 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	I0328 01:33:31.486714    6044 command_runner.go:130] > Mar 28 01:32:10 multinode-240000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	I0328 01:33:31.486714    6044 command_runner.go:130] > Mar 28 01:32:11 multinode-240000 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	I0328 01:33:31.486714    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0328 01:33:31.486714    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.568166    1533 server.go:487] "Kubelet version" kubeletVersion="v1.29.3"
	I0328 01:33:31.486714    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.568590    1533 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0328 01:33:31.486714    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.568985    1533 server.go:919] "Client rotation is on, will bootstrap in background"
	I0328 01:33:31.486714    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.572343    1533 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	I0328 01:33:31.486714    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.590932    1533 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0328 01:33:31.486714    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.648763    1533 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /"
	I0328 01:33:31.486714    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.650098    1533 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
	I0328 01:33:31.486714    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.650393    1533 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
	I0328 01:33:31.486714    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.650479    1533 topology_manager.go:138] "Creating topology manager with none policy"
	I0328 01:33:31.486714    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.650495    1533 container_manager_linux.go:301] "Creating device plugin manager"
	I0328 01:33:31.486714    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.652420    1533 state_mem.go:36] "Initialized new in-memory state store"
	I0328 01:33:31.486714    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.654064    1533 kubelet.go:396] "Attempting to sync node with API server"
	I0328 01:33:31.486714    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.654388    1533 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
	I0328 01:33:31.486714    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.654468    1533 kubelet.go:312] "Adding apiserver pod source"
	I0328 01:33:31.486714    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.655057    1533 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
	I0328 01:33:31.486714    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: W0328 01:32:13.659987    1533 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)multinode-240000&limit=500&resourceVersion=0": dial tcp 172.28.229.19:8443: connect: connection refused
	I0328 01:33:31.486714    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: E0328 01:32:13.660087    1533 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)multinode-240000&limit=500&resourceVersion=0": dial tcp 172.28.229.19:8443: connect: connection refused
	I0328 01:33:31.486714    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: W0328 01:32:13.669074    1533 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.28.229.19:8443: connect: connection refused
	I0328 01:33:31.486714    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: E0328 01:32:13.669300    1533 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.28.229.19:8443: connect: connection refused
	I0328 01:33:31.486714    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.674896    1533 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="docker" version="26.0.0" apiVersion="v1"
	I0328 01:33:31.486714    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.676909    1533 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
	I0328 01:33:31.486714    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: W0328 01:32:13.677427    1533 probe.go:268] Flexvolume plugin directory at /usr/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
	I0328 01:33:31.486714    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.678180    1533 server.go:1256] "Started kubelet"
	I0328 01:33:31.486714    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.680600    1533 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
	I0328 01:33:31.486714    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.682066    1533 server.go:461] "Adding debug handlers to kubelet server"
	I0328 01:33:31.486714    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.683585    1533 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
	I0328 01:33:31.486714    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.684672    1533 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
	I0328 01:33:31.486714    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: E0328 01:32:13.686372    1533 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events\": dial tcp 172.28.229.19:8443: connect: connection refused" event="&Event{ObjectMeta:{multinode-240000.17c0c99ccc29b81f  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:multinode-240000,UID:multinode-240000,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:multinode-240000,},FirstTimestamp:2024-03-28 01:32:13.678155807 +0000 UTC m=+0.237165597,LastTimestamp:2024-03-28 01:32:13.678155807 +0000 UTC m=+0.237165597,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:multinode-240000,}"
	I0328 01:33:31.486714    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.690229    1533 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
	I0328 01:33:31.486714    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.708889    1533 volume_manager.go:291] "Starting Kubelet Volume Manager"
	I0328 01:33:31.486714    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.712930    1533 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
	I0328 01:33:31.487735    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: E0328 01:32:13.730166    1533 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-240000?timeout=10s\": dial tcp 172.28.229.19:8443: connect: connection refused" interval="200ms"
	I0328 01:33:31.487735    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: W0328 01:32:13.730938    1533 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.28.229.19:8443: connect: connection refused
	I0328 01:33:31.487735    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: E0328 01:32:13.731114    1533 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.28.229.19:8443: connect: connection refused
	I0328 01:33:31.487735    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.739149    1533 reconciler_new.go:29] "Reconciler: start to sync state"
	I0328 01:33:31.487735    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.749138    1533 factory.go:221] Registration of the systemd container factory successfully
	I0328 01:33:31.487735    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.749449    1533 factory.go:219] Registration of the crio container factory failed: Get "http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)crio%!F(MISSING)crio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
	I0328 01:33:31.487735    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.750189    1533 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
	I0328 01:33:31.487735    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.776861    1533 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
	I0328 01:33:31.487735    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.786285    1533 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
	I0328 01:33:31.487735    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.788142    1533 status_manager.go:217] "Starting to sync pod status with apiserver"
	I0328 01:33:31.487735    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.788369    1533 kubelet.go:2329] "Starting kubelet main sync loop"
	I0328 01:33:31.487735    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: E0328 01:32:13.788778    1533 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
	I0328 01:33:31.487735    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: W0328 01:32:13.796114    1533 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.28.229.19:8443: connect: connection refused
	I0328 01:33:31.487735    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: E0328 01:32:13.796211    1533 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.28.229.19:8443: connect: connection refused
	I0328 01:33:31.487735    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.819127    1533 cpu_manager.go:214] "Starting CPU manager" policy="none"
	I0328 01:33:31.487735    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.819290    1533 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
	I0328 01:33:31.487735    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.819423    1533 state_mem.go:36] "Initialized new in-memory state store"
	I0328 01:33:31.487735    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: E0328 01:32:13.820373    1533 iptables.go:575] "Could not set up iptables canary" err=<
	I0328 01:33:31.487735    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	I0328 01:33:31.487735    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	I0328 01:33:31.487735    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]:         Perhaps ip6tables or your kernel needs to be upgraded.
	I0328 01:33:31.487735    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	I0328 01:33:31.487735    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.823600    1533 state_mem.go:88] "Updated default CPUSet" cpuSet=""
	I0328 01:33:31.487735    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.823686    1533 state_mem.go:96] "Updated CPUSet assignments" assignments={}
	I0328 01:33:31.487735    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.823700    1533 policy_none.go:49] "None policy: Start"
	I0328 01:33:31.487735    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.830073    1533 kubelet_node_status.go:73] "Attempting to register node" node="multinode-240000"
	I0328 01:33:31.487735    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: E0328 01:32:13.831657    1533 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.28.229.19:8443: connect: connection refused" node="multinode-240000"
	I0328 01:33:31.487735    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.843841    1533 memory_manager.go:170] "Starting memorymanager" policy="None"
	I0328 01:33:31.487735    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.843966    1533 state_mem.go:35] "Initializing new in-memory state store"
	I0328 01:33:31.487735    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.844749    1533 state_mem.go:75] "Updated machine memory state"
	I0328 01:33:31.487735    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.847245    1533 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
	I0328 01:33:31.487735    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.848649    1533 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
	I0328 01:33:31.487735    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.890150    1533 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="930fbfde452c0b2b3f13a6751fc648a70e87137f38175cb6dd161b40193b9a79"
	I0328 01:33:31.487735    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.890206    1533 topology_manager.go:215] "Topology Admit Handler" podUID="ada1864a97137760b3789cc738948aa2" podNamespace="kube-system" podName="kube-apiserver-multinode-240000"
	I0328 01:33:31.487735    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.908127    1533 topology_manager.go:215] "Topology Admit Handler" podUID="092744cdc60a216294790b52c372bdaa" podNamespace="kube-system" podName="kube-controller-manager-multinode-240000"
	I0328 01:33:31.487735    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: E0328 01:32:13.916258    1533 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"multinode-240000\" not found"
	I0328 01:33:31.487735    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.922354    1533 topology_manager.go:215] "Topology Admit Handler" podUID="f5f9b00a2a0d8b16290abf555def0fb3" podNamespace="kube-system" podName="kube-scheduler-multinode-240000"
	I0328 01:33:31.487735    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: E0328 01:32:13.932448    1533 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-240000?timeout=10s\": dial tcp 172.28.229.19:8443: connect: connection refused" interval="400ms"
	I0328 01:33:31.487735    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.941331    1533 topology_manager.go:215] "Topology Admit Handler" podUID="9f48c65a58defdbb87996760bf93b230" podNamespace="kube-system" podName="etcd-multinode-240000"
	I0328 01:33:31.487735    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.953609    1533 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6b6f67390b0701700963eec28e4c4cc4aa0e852e4ec0f2392f0f6f5d9bdad52a"
	I0328 01:33:31.487735    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.953654    1533 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="763932cfdf0b0ce7a2df0bd78fe540ad8e5811cd74af29eee46932fb651a4df3"
	I0328 01:33:31.487735    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.953669    1533 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6ae82cd0a848978d4fcc6941c33dd7fd18404e11e40d6b5d9f46484a6af7ec7d"
	I0328 01:33:31.487735    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.966780    1533 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/092744cdc60a216294790b52c372bdaa-usr-share-ca-certificates\") pod \"kube-controller-manager-multinode-240000\" (UID: \"092744cdc60a216294790b52c372bdaa\") " pod="kube-system/kube-controller-manager-multinode-240000"
	I0328 01:33:31.487735    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.966955    1533 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ada1864a97137760b3789cc738948aa2-ca-certs\") pod \"kube-apiserver-multinode-240000\" (UID: \"ada1864a97137760b3789cc738948aa2\") " pod="kube-system/kube-apiserver-multinode-240000"
	I0328 01:33:31.488735    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.967022    1533 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ada1864a97137760b3789cc738948aa2-k8s-certs\") pod \"kube-apiserver-multinode-240000\" (UID: \"ada1864a97137760b3789cc738948aa2\") " pod="kube-system/kube-apiserver-multinode-240000"
	I0328 01:33:31.488735    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.967064    1533 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ada1864a97137760b3789cc738948aa2-usr-share-ca-certificates\") pod \"kube-apiserver-multinode-240000\" (UID: \"ada1864a97137760b3789cc738948aa2\") " pod="kube-system/kube-apiserver-multinode-240000"
	I0328 01:33:31.488735    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.967128    1533 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/092744cdc60a216294790b52c372bdaa-ca-certs\") pod \"kube-controller-manager-multinode-240000\" (UID: \"092744cdc60a216294790b52c372bdaa\") " pod="kube-system/kube-controller-manager-multinode-240000"
	I0328 01:33:31.488735    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.967158    1533 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/092744cdc60a216294790b52c372bdaa-flexvolume-dir\") pod \"kube-controller-manager-multinode-240000\" (UID: \"092744cdc60a216294790b52c372bdaa\") " pod="kube-system/kube-controller-manager-multinode-240000"
	I0328 01:33:31.488735    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.967238    1533 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/092744cdc60a216294790b52c372bdaa-k8s-certs\") pod \"kube-controller-manager-multinode-240000\" (UID: \"092744cdc60a216294790b52c372bdaa\") " pod="kube-system/kube-controller-manager-multinode-240000"
	I0328 01:33:31.488735    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.967310    1533 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/092744cdc60a216294790b52c372bdaa-kubeconfig\") pod \"kube-controller-manager-multinode-240000\" (UID: \"092744cdc60a216294790b52c372bdaa\") " pod="kube-system/kube-controller-manager-multinode-240000"
	I0328 01:33:31.488735    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.969606    1533 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="28426f4e9df5e7247fb25f1d5d48b9917e6d95d1f58292026ed0fde424835379"
	I0328 01:33:31.488735    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.985622    1533 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5d9ed3a20e88558fec102c7c331c667347b65f4c3d7d91740e135d71d8c45e6d"
	I0328 01:33:31.488735    6044 command_runner.go:130] > Mar 28 01:32:14 multinode-240000 kubelet[1533]: I0328 01:32:14.000616    1533 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7415d077c6f8104e5bc256b9c398a1cd3b34b68ae6ab02765cf3a8a5090c4b88"
	I0328 01:33:31.488735    6044 command_runner.go:130] > Mar 28 01:32:14 multinode-240000 kubelet[1533]: I0328 01:32:14.015792    1533 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ec77663c174f9dcbe665439298f2fb709a33fb88f7ac97c33834b5a202fe4540"
	I0328 01:33:31.488735    6044 command_runner.go:130] > Mar 28 01:32:14 multinode-240000 kubelet[1533]: I0328 01:32:14.042348    1533 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="20ff2ecb3a6dbfc2d1215de07989433af9d7d836214ecb1ab63afc9e48ef03ce"
	I0328 01:33:31.488735    6044 command_runner.go:130] > Mar 28 01:32:14 multinode-240000 kubelet[1533]: I0328 01:32:14.048339    1533 kubelet_node_status.go:73] "Attempting to register node" node="multinode-240000"
	I0328 01:33:31.488735    6044 command_runner.go:130] > Mar 28 01:32:14 multinode-240000 kubelet[1533]: E0328 01:32:14.049760    1533 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.28.229.19:8443: connect: connection refused" node="multinode-240000"
	I0328 01:33:31.488735    6044 command_runner.go:130] > Mar 28 01:32:14 multinode-240000 kubelet[1533]: I0328 01:32:14.068959    1533 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f5f9b00a2a0d8b16290abf555def0fb3-kubeconfig\") pod \"kube-scheduler-multinode-240000\" (UID: \"f5f9b00a2a0d8b16290abf555def0fb3\") " pod="kube-system/kube-scheduler-multinode-240000"
	I0328 01:33:31.488735    6044 command_runner.go:130] > Mar 28 01:32:14 multinode-240000 kubelet[1533]: I0328 01:32:14.069009    1533 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/9f48c65a58defdbb87996760bf93b230-etcd-certs\") pod \"etcd-multinode-240000\" (UID: \"9f48c65a58defdbb87996760bf93b230\") " pod="kube-system/etcd-multinode-240000"
	I0328 01:33:31.488735    6044 command_runner.go:130] > Mar 28 01:32:14 multinode-240000 kubelet[1533]: I0328 01:32:14.069204    1533 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/9f48c65a58defdbb87996760bf93b230-etcd-data\") pod \"etcd-multinode-240000\" (UID: \"9f48c65a58defdbb87996760bf93b230\") " pod="kube-system/etcd-multinode-240000"
	I0328 01:33:31.488735    6044 command_runner.go:130] > Mar 28 01:32:14 multinode-240000 kubelet[1533]: E0328 01:32:14.335282    1533 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-240000?timeout=10s\": dial tcp 172.28.229.19:8443: connect: connection refused" interval="800ms"
	I0328 01:33:31.488735    6044 command_runner.go:130] > Mar 28 01:32:14 multinode-240000 kubelet[1533]: I0328 01:32:14.463052    1533 kubelet_node_status.go:73] "Attempting to register node" node="multinode-240000"
	I0328 01:33:31.489754    6044 command_runner.go:130] > Mar 28 01:32:14 multinode-240000 kubelet[1533]: E0328 01:32:14.464639    1533 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.28.229.19:8443: connect: connection refused" node="multinode-240000"
	I0328 01:33:31.489754    6044 command_runner.go:130] > Mar 28 01:32:14 multinode-240000 kubelet[1533]: W0328 01:32:14.765820    1533 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.28.229.19:8443: connect: connection refused
	I0328 01:33:31.489754    6044 command_runner.go:130] > Mar 28 01:32:14 multinode-240000 kubelet[1533]: E0328 01:32:14.765926    1533 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.28.229.19:8443: connect: connection refused
	I0328 01:33:31.489754    6044 command_runner.go:130] > Mar 28 01:32:14 multinode-240000 kubelet[1533]: W0328 01:32:14.983409    1533 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.28.229.19:8443: connect: connection refused
	I0328 01:33:31.489754    6044 command_runner.go:130] > Mar 28 01:32:14 multinode-240000 kubelet[1533]: E0328 01:32:14.983490    1533 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.28.229.19:8443: connect: connection refused
	I0328 01:33:31.489754    6044 command_runner.go:130] > Mar 28 01:32:15 multinode-240000 kubelet[1533]: I0328 01:32:15.093921    1533 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4dd7c4652074475872599900ce854e48425a373dfa665073bd9bfb56fa5330c0"
	I0328 01:33:31.489754    6044 command_runner.go:130] > Mar 28 01:32:15 multinode-240000 kubelet[1533]: I0328 01:32:15.109197    1533 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8780a18ab975521e6b1b20e4b7cffe786927f03654dd858b9d179f1d73d13d81"
	I0328 01:33:31.489754    6044 command_runner.go:130] > Mar 28 01:32:15 multinode-240000 kubelet[1533]: E0328 01:32:15.138489    1533 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-240000?timeout=10s\": dial tcp 172.28.229.19:8443: connect: connection refused" interval="1.6s"
	I0328 01:33:31.489754    6044 command_runner.go:130] > Mar 28 01:32:15 multinode-240000 kubelet[1533]: W0328 01:32:15.162611    1533 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)multinode-240000&limit=500&resourceVersion=0": dial tcp 172.28.229.19:8443: connect: connection refused
	I0328 01:33:31.489754    6044 command_runner.go:130] > Mar 28 01:32:15 multinode-240000 kubelet[1533]: E0328 01:32:15.162839    1533 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)multinode-240000&limit=500&resourceVersion=0": dial tcp 172.28.229.19:8443: connect: connection refused
	I0328 01:33:31.489754    6044 command_runner.go:130] > Mar 28 01:32:15 multinode-240000 kubelet[1533]: W0328 01:32:15.243486    1533 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.28.229.19:8443: connect: connection refused
	I0328 01:33:31.489754    6044 command_runner.go:130] > Mar 28 01:32:15 multinode-240000 kubelet[1533]: E0328 01:32:15.243618    1533 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.28.229.19:8443: connect: connection refused
	I0328 01:33:31.489754    6044 command_runner.go:130] > Mar 28 01:32:15 multinode-240000 kubelet[1533]: I0328 01:32:15.300156    1533 kubelet_node_status.go:73] "Attempting to register node" node="multinode-240000"
	I0328 01:33:31.489754    6044 command_runner.go:130] > Mar 28 01:32:15 multinode-240000 kubelet[1533]: E0328 01:32:15.300985    1533 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.28.229.19:8443: connect: connection refused" node="multinode-240000"
	I0328 01:33:31.489754    6044 command_runner.go:130] > Mar 28 01:32:16 multinode-240000 kubelet[1533]: I0328 01:32:16.919859    1533 kubelet_node_status.go:73] "Attempting to register node" node="multinode-240000"
	I0328 01:33:31.489754    6044 command_runner.go:130] > Mar 28 01:32:19 multinode-240000 kubelet[1533]: I0328 01:32:19.585350    1533 kubelet_node_status.go:112] "Node was previously registered" node="multinode-240000"
	I0328 01:33:31.489754    6044 command_runner.go:130] > Mar 28 01:32:19 multinode-240000 kubelet[1533]: I0328 01:32:19.586142    1533 kubelet_node_status.go:76] "Successfully registered node" node="multinode-240000"
	I0328 01:33:31.489754    6044 command_runner.go:130] > Mar 28 01:32:19 multinode-240000 kubelet[1533]: I0328 01:32:19.588202    1533 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	I0328 01:33:31.489754    6044 command_runner.go:130] > Mar 28 01:32:19 multinode-240000 kubelet[1533]: I0328 01:32:19.589607    1533 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	I0328 01:33:31.489754    6044 command_runner.go:130] > Mar 28 01:32:19 multinode-240000 kubelet[1533]: I0328 01:32:19.606942    1533 setters.go:568] "Node became not ready" node="multinode-240000" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-03-28T01:32:19Z","lastTransitionTime":"2024-03-28T01:32:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"}
	I0328 01:33:31.489754    6044 command_runner.go:130] > Mar 28 01:32:19 multinode-240000 kubelet[1533]: I0328 01:32:19.664958    1533 apiserver.go:52] "Watching apiserver"
	I0328 01:33:31.489754    6044 command_runner.go:130] > Mar 28 01:32:19 multinode-240000 kubelet[1533]: I0328 01:32:19.670955    1533 topology_manager.go:215] "Topology Admit Handler" podUID="dc1416cc-736d-4eab-b95d-e963572b78e3" podNamespace="kube-system" podName="coredns-76f75df574-776ph"
	I0328 01:33:31.489754    6044 command_runner.go:130] > Mar 28 01:32:19 multinode-240000 kubelet[1533]: E0328 01:32:19.671192    1533 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-76f75df574-776ph" podUID="dc1416cc-736d-4eab-b95d-e963572b78e3"
	I0328 01:33:31.489754    6044 command_runner.go:130] > Mar 28 01:32:19 multinode-240000 kubelet[1533]: I0328 01:32:19.671207    1533 kubelet.go:1903] "Trying to delete pod" pod="kube-system/etcd-multinode-240000" podUID="8c9e76e4-ed9f-4595-aa5e-ddd6e74f4e93"
	I0328 01:33:31.489754    6044 command_runner.go:130] > Mar 28 01:32:19 multinode-240000 kubelet[1533]: I0328 01:32:19.672582    1533 topology_manager.go:215] "Topology Admit Handler" podUID="7c75e225-0e90-4916-bf27-a00a036e0955" podNamespace="kube-system" podName="kindnet-rwghf"
	I0328 01:33:31.489754    6044 command_runner.go:130] > Mar 28 01:32:19 multinode-240000 kubelet[1533]: I0328 01:32:19.672700    1533 topology_manager.go:215] "Topology Admit Handler" podUID="22fd5683-834d-47ae-a5b4-1ed980514e1b" podNamespace="kube-system" podName="kube-proxy-47rqg"
	I0328 01:33:31.489754    6044 command_runner.go:130] > Mar 28 01:32:19 multinode-240000 kubelet[1533]: I0328 01:32:19.672921    1533 topology_manager.go:215] "Topology Admit Handler" podUID="3f881c2f-5b9a-4dc5-bc8c-56784b9ff60f" podNamespace="kube-system" podName="storage-provisioner"
	I0328 01:33:31.489754    6044 command_runner.go:130] > Mar 28 01:32:19 multinode-240000 kubelet[1533]: I0328 01:32:19.672997    1533 topology_manager.go:215] "Topology Admit Handler" podUID="82be2bd2-ca76-4804-8e23-ebd40a434863" podNamespace="default" podName="busybox-7fdf7869d9-ct428"
	I0328 01:33:31.490718    6044 command_runner.go:130] > Mar 28 01:32:19 multinode-240000 kubelet[1533]: E0328 01:32:19.673204    1533 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7fdf7869d9-ct428" podUID="82be2bd2-ca76-4804-8e23-ebd40a434863"
	I0328 01:33:31.490718    6044 command_runner.go:130] > Mar 28 01:32:19 multinode-240000 kubelet[1533]: I0328 01:32:19.674661    1533 kubelet.go:1903] "Trying to delete pod" pod="kube-system/kube-apiserver-multinode-240000" podUID="7736298d-3898-4693-84bf-2311305bf52c"
	I0328 01:33:31.490718    6044 command_runner.go:130] > Mar 28 01:32:19 multinode-240000 kubelet[1533]: I0328 01:32:19.710220    1533 kubelet.go:1908] "Deleted mirror pod because it is outdated" pod="kube-system/etcd-multinode-240000"
	I0328 01:33:31.490718    6044 command_runner.go:130] > Mar 28 01:32:19 multinode-240000 kubelet[1533]: I0328 01:32:19.714418    1533 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
	I0328 01:33:31.490718    6044 command_runner.go:130] > Mar 28 01:32:19 multinode-240000 kubelet[1533]: I0328 01:32:19.725067    1533 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7c75e225-0e90-4916-bf27-a00a036e0955-xtables-lock\") pod \"kindnet-rwghf\" (UID: \"7c75e225-0e90-4916-bf27-a00a036e0955\") " pod="kube-system/kindnet-rwghf"
	I0328 01:33:31.490718    6044 command_runner.go:130] > Mar 28 01:32:19 multinode-240000 kubelet[1533]: I0328 01:32:19.725144    1533 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/3f881c2f-5b9a-4dc5-bc8c-56784b9ff60f-tmp\") pod \"storage-provisioner\" (UID: \"3f881c2f-5b9a-4dc5-bc8c-56784b9ff60f\") " pod="kube-system/storage-provisioner"
	I0328 01:33:31.490718    6044 command_runner.go:130] > Mar 28 01:32:19 multinode-240000 kubelet[1533]: I0328 01:32:19.725200    1533 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/22fd5683-834d-47ae-a5b4-1ed980514e1b-xtables-lock\") pod \"kube-proxy-47rqg\" (UID: \"22fd5683-834d-47ae-a5b4-1ed980514e1b\") " pod="kube-system/kube-proxy-47rqg"
	I0328 01:33:31.490718    6044 command_runner.go:130] > Mar 28 01:32:19 multinode-240000 kubelet[1533]: I0328 01:32:19.725237    1533 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/7c75e225-0e90-4916-bf27-a00a036e0955-cni-cfg\") pod \"kindnet-rwghf\" (UID: \"7c75e225-0e90-4916-bf27-a00a036e0955\") " pod="kube-system/kindnet-rwghf"
	I0328 01:33:31.490718    6044 command_runner.go:130] > Mar 28 01:32:19 multinode-240000 kubelet[1533]: I0328 01:32:19.725266    1533 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7c75e225-0e90-4916-bf27-a00a036e0955-lib-modules\") pod \"kindnet-rwghf\" (UID: \"7c75e225-0e90-4916-bf27-a00a036e0955\") " pod="kube-system/kindnet-rwghf"
	I0328 01:33:31.490718    6044 command_runner.go:130] > Mar 28 01:32:19 multinode-240000 kubelet[1533]: I0328 01:32:19.725305    1533 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/22fd5683-834d-47ae-a5b4-1ed980514e1b-lib-modules\") pod \"kube-proxy-47rqg\" (UID: \"22fd5683-834d-47ae-a5b4-1ed980514e1b\") " pod="kube-system/kube-proxy-47rqg"
	I0328 01:33:31.490718    6044 command_runner.go:130] > Mar 28 01:32:19 multinode-240000 kubelet[1533]: E0328 01:32:19.725432    1533 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0328 01:33:31.490718    6044 command_runner.go:130] > Mar 28 01:32:19 multinode-240000 kubelet[1533]: E0328 01:32:19.725551    1533 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/dc1416cc-736d-4eab-b95d-e963572b78e3-config-volume podName:dc1416cc-736d-4eab-b95d-e963572b78e3 nodeName:}" failed. No retries permitted until 2024-03-28 01:32:20.225500685 +0000 UTC m=+6.784510375 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/dc1416cc-736d-4eab-b95d-e963572b78e3-config-volume") pod "coredns-76f75df574-776ph" (UID: "dc1416cc-736d-4eab-b95d-e963572b78e3") : object "kube-system"/"coredns" not registered
	I0328 01:33:31.490718    6044 command_runner.go:130] > Mar 28 01:32:19 multinode-240000 kubelet[1533]: I0328 01:32:19.727738    1533 kubelet.go:1908] "Deleted mirror pod because it is outdated" pod="kube-system/kube-apiserver-multinode-240000"
	I0328 01:33:31.490718    6044 command_runner.go:130] > Mar 28 01:32:19 multinode-240000 kubelet[1533]: I0328 01:32:19.734766    1533 status_manager.go:877] "Failed to update status for pod" pod="kube-system/etcd-multinode-240000" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c9e76e4-ed9f-4595-aa5e-ddd6e74f4e93\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"$setElementOrder/hostIPs\\\":[{\\\"ip\\\":\\\"172.28.229.19\\\"}],\\\"$setElementOrder/podIPs\\\":[{\\\"ip\\\":\\\"172.28.229.19\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2024-03-28T01:32:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2024-03-28T01:32:14Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2024-03-28T01:32:14Z\\\",\\\"message\\\":\\\"cont
ainers with unready status: [etcd]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2024-03-28T01:32:14Z\\\",\\\"message\\\":\\\"containers with unready status: [etcd]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2024-03-28T01:32:14Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"docker://ab4a76ecb029b98cd5b2c7ce34c9d81d5da9b76e6721e8e54059f840240fcb66\\\",\\\"image\\\":\\\"registry.k8s.io/etcd:3.5.12-0\\\",\\\"imageID\\\":\\\"docker-pullable://registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2024-03-28T01:32:15Z\\\"}}}],\\\"hostIP\\\":\\\"172.28.229.19\\\",\\\"hostIPs\\\"
:[{\\\"ip\\\":\\\"172.28.229.19\\\"},{\\\"$patch\\\":\\\"delete\\\",\\\"ip\\\":\\\"172.28.227.122\\\"}],\\\"podIP\\\":\\\"172.28.229.19\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"172.28.229.19\\\"},{\\\"$patch\\\":\\\"delete\\\",\\\"ip\\\":\\\"172.28.227.122\\\"}],\\\"startTime\\\":\\\"2024-03-28T01:32:14Z\\\"}}\" for pod \"kube-system\"/\"etcd-multinode-240000\": pods \"etcd-multinode-240000\" not found"
	I0328 01:33:31.490718    6044 command_runner.go:130] > Mar 28 01:32:19 multinode-240000 kubelet[1533]: I0328 01:32:19.799037    1533 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="08b85a8adf05b50d7739532a291175d4" path="/var/lib/kubelet/pods/08b85a8adf05b50d7739532a291175d4/volumes"
	I0328 01:33:31.490718    6044 command_runner.go:130] > Mar 28 01:32:19 multinode-240000 kubelet[1533]: E0328 01:32:19.799563    1533 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0328 01:33:31.490718    6044 command_runner.go:130] > Mar 28 01:32:19 multinode-240000 kubelet[1533]: E0328 01:32:19.799591    1533 projected.go:200] Error preparing data for projected volume kube-api-access-86msg for pod default/busybox-7fdf7869d9-ct428: object "default"/"kube-root-ca.crt" not registered
	I0328 01:33:31.490718    6044 command_runner.go:130] > Mar 28 01:32:19 multinode-240000 kubelet[1533]: E0328 01:32:19.799660    1533 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/82be2bd2-ca76-4804-8e23-ebd40a434863-kube-api-access-86msg podName:82be2bd2-ca76-4804-8e23-ebd40a434863 nodeName:}" failed. No retries permitted until 2024-03-28 01:32:20.299638671 +0000 UTC m=+6.858648361 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-86msg" (UniqueName: "kubernetes.io/projected/82be2bd2-ca76-4804-8e23-ebd40a434863-kube-api-access-86msg") pod "busybox-7fdf7869d9-ct428" (UID: "82be2bd2-ca76-4804-8e23-ebd40a434863") : object "default"/"kube-root-ca.crt" not registered
	I0328 01:33:31.490718    6044 command_runner.go:130] > Mar 28 01:32:19 multinode-240000 kubelet[1533]: I0328 01:32:19.802339    1533 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3bf911dad00226d1456d6201aff35c8b" path="/var/lib/kubelet/pods/3bf911dad00226d1456d6201aff35c8b/volumes"
	I0328 01:33:31.490718    6044 command_runner.go:130] > Mar 28 01:32:19 multinode-240000 kubelet[1533]: I0328 01:32:19.949419    1533 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/etcd-multinode-240000" podStartSLOduration=0.949323047 podStartE2EDuration="949.323047ms" podCreationTimestamp="2024-03-28 01:32:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-03-28 01:32:19.919943873 +0000 UTC m=+6.478953663" watchObservedRunningTime="2024-03-28 01:32:19.949323047 +0000 UTC m=+6.508332737"
	I0328 01:33:31.490718    6044 command_runner.go:130] > Mar 28 01:32:19 multinode-240000 kubelet[1533]: I0328 01:32:19.949693    1533 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-multinode-240000" podStartSLOduration=0.949665448 podStartE2EDuration="949.665448ms" podCreationTimestamp="2024-03-28 01:32:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-03-28 01:32:19.941427427 +0000 UTC m=+6.500437217" watchObservedRunningTime="2024-03-28 01:32:19.949665448 +0000 UTC m=+6.508675138"
	I0328 01:33:31.490718    6044 command_runner.go:130] > Mar 28 01:32:20 multinode-240000 kubelet[1533]: E0328 01:32:20.230868    1533 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0328 01:33:31.490718    6044 command_runner.go:130] > Mar 28 01:32:20 multinode-240000 kubelet[1533]: E0328 01:32:20.231013    1533 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/dc1416cc-736d-4eab-b95d-e963572b78e3-config-volume podName:dc1416cc-736d-4eab-b95d-e963572b78e3 nodeName:}" failed. No retries permitted until 2024-03-28 01:32:21.230991954 +0000 UTC m=+7.790001744 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/dc1416cc-736d-4eab-b95d-e963572b78e3-config-volume") pod "coredns-76f75df574-776ph" (UID: "dc1416cc-736d-4eab-b95d-e963572b78e3") : object "kube-system"/"coredns" not registered
	I0328 01:33:31.490718    6044 command_runner.go:130] > Mar 28 01:32:20 multinode-240000 kubelet[1533]: E0328 01:32:20.331172    1533 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0328 01:33:31.490718    6044 command_runner.go:130] > Mar 28 01:32:20 multinode-240000 kubelet[1533]: E0328 01:32:20.331223    1533 projected.go:200] Error preparing data for projected volume kube-api-access-86msg for pod default/busybox-7fdf7869d9-ct428: object "default"/"kube-root-ca.crt" not registered
	I0328 01:33:31.491726    6044 command_runner.go:130] > Mar 28 01:32:20 multinode-240000 kubelet[1533]: E0328 01:32:20.331292    1533 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/82be2bd2-ca76-4804-8e23-ebd40a434863-kube-api-access-86msg podName:82be2bd2-ca76-4804-8e23-ebd40a434863 nodeName:}" failed. No retries permitted until 2024-03-28 01:32:21.331274305 +0000 UTC m=+7.890283995 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-86msg" (UniqueName: "kubernetes.io/projected/82be2bd2-ca76-4804-8e23-ebd40a434863-kube-api-access-86msg") pod "busybox-7fdf7869d9-ct428" (UID: "82be2bd2-ca76-4804-8e23-ebd40a434863") : object "default"/"kube-root-ca.crt" not registered
	I0328 01:33:31.491726    6044 command_runner.go:130] > Mar 28 01:32:20 multinode-240000 kubelet[1533]: I0328 01:32:20.880883    1533 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="821d3cf9ae1a9ffce2f350e9ee239e00fd8743eb338fae8a5b39734fc9cabf5e"
	I0328 01:33:31.491726    6044 command_runner.go:130] > Mar 28 01:32:20 multinode-240000 kubelet[1533]: I0328 01:32:20.905234    1533 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dfd01cb54b7d89aef97b057d7578bb34d4f58b0e2c9aacddeeff9fbb19db3cb6"
	I0328 01:33:31.491726    6044 command_runner.go:130] > Mar 28 01:32:21 multinode-240000 kubelet[1533]: E0328 01:32:21.238101    1533 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0328 01:33:31.491726    6044 command_runner.go:130] > Mar 28 01:32:21 multinode-240000 kubelet[1533]: E0328 01:32:21.238271    1533 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/dc1416cc-736d-4eab-b95d-e963572b78e3-config-volume podName:dc1416cc-736d-4eab-b95d-e963572b78e3 nodeName:}" failed. No retries permitted until 2024-03-28 01:32:23.238201582 +0000 UTC m=+9.797211372 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/dc1416cc-736d-4eab-b95d-e963572b78e3-config-volume") pod "coredns-76f75df574-776ph" (UID: "dc1416cc-736d-4eab-b95d-e963572b78e3") : object "kube-system"/"coredns" not registered
	I0328 01:33:31.491726    6044 command_runner.go:130] > Mar 28 01:32:21 multinode-240000 kubelet[1533]: I0328 01:32:21.272138    1533 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="347f7ad7ebaed8796c8b12cf936e661c605c1c7a9dc02ccb15b4c682a96c1058"
	I0328 01:33:31.491726    6044 command_runner.go:130] > Mar 28 01:32:21 multinode-240000 kubelet[1533]: E0328 01:32:21.338941    1533 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0328 01:33:31.491726    6044 command_runner.go:130] > Mar 28 01:32:21 multinode-240000 kubelet[1533]: E0328 01:32:21.338996    1533 projected.go:200] Error preparing data for projected volume kube-api-access-86msg for pod default/busybox-7fdf7869d9-ct428: object "default"/"kube-root-ca.crt" not registered
	I0328 01:33:31.491726    6044 command_runner.go:130] > Mar 28 01:32:21 multinode-240000 kubelet[1533]: E0328 01:32:21.339062    1533 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/82be2bd2-ca76-4804-8e23-ebd40a434863-kube-api-access-86msg podName:82be2bd2-ca76-4804-8e23-ebd40a434863 nodeName:}" failed. No retries permitted until 2024-03-28 01:32:23.339043635 +0000 UTC m=+9.898053325 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-86msg" (UniqueName: "kubernetes.io/projected/82be2bd2-ca76-4804-8e23-ebd40a434863-kube-api-access-86msg") pod "busybox-7fdf7869d9-ct428" (UID: "82be2bd2-ca76-4804-8e23-ebd40a434863") : object "default"/"kube-root-ca.crt" not registered
	I0328 01:33:31.491726    6044 command_runner.go:130] > Mar 28 01:32:21 multinode-240000 kubelet[1533]: E0328 01:32:21.791679    1533 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-76f75df574-776ph" podUID="dc1416cc-736d-4eab-b95d-e963572b78e3"
	I0328 01:33:31.491726    6044 command_runner.go:130] > Mar 28 01:32:21 multinode-240000 kubelet[1533]: E0328 01:32:21.792217    1533 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7fdf7869d9-ct428" podUID="82be2bd2-ca76-4804-8e23-ebd40a434863"
	I0328 01:33:31.491726    6044 command_runner.go:130] > Mar 28 01:32:23 multinode-240000 kubelet[1533]: E0328 01:32:23.261654    1533 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0328 01:33:31.491726    6044 command_runner.go:130] > Mar 28 01:32:23 multinode-240000 kubelet[1533]: E0328 01:32:23.261858    1533 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/dc1416cc-736d-4eab-b95d-e963572b78e3-config-volume podName:dc1416cc-736d-4eab-b95d-e963572b78e3 nodeName:}" failed. No retries permitted until 2024-03-28 01:32:27.261834961 +0000 UTC m=+13.820844751 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/dc1416cc-736d-4eab-b95d-e963572b78e3-config-volume") pod "coredns-76f75df574-776ph" (UID: "dc1416cc-736d-4eab-b95d-e963572b78e3") : object "kube-system"/"coredns" not registered
	I0328 01:33:31.491726    6044 command_runner.go:130] > Mar 28 01:32:23 multinode-240000 kubelet[1533]: E0328 01:32:23.362225    1533 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0328 01:33:31.491726    6044 command_runner.go:130] > Mar 28 01:32:23 multinode-240000 kubelet[1533]: E0328 01:32:23.362265    1533 projected.go:200] Error preparing data for projected volume kube-api-access-86msg for pod default/busybox-7fdf7869d9-ct428: object "default"/"kube-root-ca.crt" not registered
	I0328 01:33:31.491726    6044 command_runner.go:130] > Mar 28 01:32:23 multinode-240000 kubelet[1533]: E0328 01:32:23.362325    1533 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/82be2bd2-ca76-4804-8e23-ebd40a434863-kube-api-access-86msg podName:82be2bd2-ca76-4804-8e23-ebd40a434863 nodeName:}" failed. No retries permitted until 2024-03-28 01:32:27.362305413 +0000 UTC m=+13.921315103 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-86msg" (UniqueName: "kubernetes.io/projected/82be2bd2-ca76-4804-8e23-ebd40a434863-kube-api-access-86msg") pod "busybox-7fdf7869d9-ct428" (UID: "82be2bd2-ca76-4804-8e23-ebd40a434863") : object "default"/"kube-root-ca.crt" not registered
	I0328 01:33:31.491726    6044 command_runner.go:130] > Mar 28 01:32:23 multinode-240000 kubelet[1533]: E0328 01:32:23.790396    1533 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-76f75df574-776ph" podUID="dc1416cc-736d-4eab-b95d-e963572b78e3"
	I0328 01:33:31.491726    6044 command_runner.go:130] > Mar 28 01:32:23 multinode-240000 kubelet[1533]: E0328 01:32:23.790902    1533 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7fdf7869d9-ct428" podUID="82be2bd2-ca76-4804-8e23-ebd40a434863"
	I0328 01:33:31.491726    6044 command_runner.go:130] > Mar 28 01:32:25 multinode-240000 kubelet[1533]: E0328 01:32:25.790044    1533 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7fdf7869d9-ct428" podUID="82be2bd2-ca76-4804-8e23-ebd40a434863"
	I0328 01:33:31.491726    6044 command_runner.go:130] > Mar 28 01:32:25 multinode-240000 kubelet[1533]: E0328 01:32:25.790562    1533 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-76f75df574-776ph" podUID="dc1416cc-736d-4eab-b95d-e963572b78e3"
	I0328 01:33:31.491726    6044 command_runner.go:130] > Mar 28 01:32:27 multinode-240000 kubelet[1533]: E0328 01:32:27.292215    1533 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0328 01:33:31.491726    6044 command_runner.go:130] > Mar 28 01:32:27 multinode-240000 kubelet[1533]: E0328 01:32:27.292399    1533 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/dc1416cc-736d-4eab-b95d-e963572b78e3-config-volume podName:dc1416cc-736d-4eab-b95d-e963572b78e3 nodeName:}" failed. No retries permitted until 2024-03-28 01:32:35.292355671 +0000 UTC m=+21.851365461 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/dc1416cc-736d-4eab-b95d-e963572b78e3-config-volume") pod "coredns-76f75df574-776ph" (UID: "dc1416cc-736d-4eab-b95d-e963572b78e3") : object "kube-system"/"coredns" not registered
	I0328 01:33:31.491726    6044 command_runner.go:130] > Mar 28 01:32:27 multinode-240000 kubelet[1533]: E0328 01:32:27.393085    1533 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0328 01:33:31.491726    6044 command_runner.go:130] > Mar 28 01:32:27 multinode-240000 kubelet[1533]: E0328 01:32:27.393207    1533 projected.go:200] Error preparing data for projected volume kube-api-access-86msg for pod default/busybox-7fdf7869d9-ct428: object "default"/"kube-root-ca.crt" not registered
	I0328 01:33:31.491726    6044 command_runner.go:130] > Mar 28 01:32:27 multinode-240000 kubelet[1533]: E0328 01:32:27.393270    1533 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/82be2bd2-ca76-4804-8e23-ebd40a434863-kube-api-access-86msg podName:82be2bd2-ca76-4804-8e23-ebd40a434863 nodeName:}" failed. No retries permitted until 2024-03-28 01:32:35.393251521 +0000 UTC m=+21.952261211 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-86msg" (UniqueName: "kubernetes.io/projected/82be2bd2-ca76-4804-8e23-ebd40a434863-kube-api-access-86msg") pod "busybox-7fdf7869d9-ct428" (UID: "82be2bd2-ca76-4804-8e23-ebd40a434863") : object "default"/"kube-root-ca.crt" not registered
	I0328 01:33:31.492735    6044 command_runner.go:130] > Mar 28 01:32:27 multinode-240000 kubelet[1533]: E0328 01:32:27.791559    1533 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7fdf7869d9-ct428" podUID="82be2bd2-ca76-4804-8e23-ebd40a434863"
	I0328 01:33:31.492735    6044 command_runner.go:130] > Mar 28 01:32:27 multinode-240000 kubelet[1533]: E0328 01:32:27.792839    1533 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-76f75df574-776ph" podUID="dc1416cc-736d-4eab-b95d-e963572b78e3"
	I0328 01:33:31.492735    6044 command_runner.go:130] > Mar 28 01:32:29 multinode-240000 kubelet[1533]: E0328 01:32:29.790087    1533 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-76f75df574-776ph" podUID="dc1416cc-736d-4eab-b95d-e963572b78e3"
	I0328 01:33:31.492735    6044 command_runner.go:130] > Mar 28 01:32:29 multinode-240000 kubelet[1533]: E0328 01:32:29.793138    1533 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7fdf7869d9-ct428" podUID="82be2bd2-ca76-4804-8e23-ebd40a434863"
	I0328 01:33:31.492735    6044 command_runner.go:130] > Mar 28 01:32:31 multinode-240000 kubelet[1533]: E0328 01:32:31.791578    1533 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7fdf7869d9-ct428" podUID="82be2bd2-ca76-4804-8e23-ebd40a434863"
	I0328 01:33:31.492735    6044 command_runner.go:130] > Mar 28 01:32:31 multinode-240000 kubelet[1533]: E0328 01:32:31.792402    1533 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-76f75df574-776ph" podUID="dc1416cc-736d-4eab-b95d-e963572b78e3"
	I0328 01:33:31.492735    6044 command_runner.go:130] > Mar 28 01:32:33 multinode-240000 kubelet[1533]: E0328 01:32:33.789342    1533 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7fdf7869d9-ct428" podUID="82be2bd2-ca76-4804-8e23-ebd40a434863"
	I0328 01:33:31.492735    6044 command_runner.go:130] > Mar 28 01:32:33 multinode-240000 kubelet[1533]: E0328 01:32:33.790306    1533 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-76f75df574-776ph" podUID="dc1416cc-736d-4eab-b95d-e963572b78e3"
	I0328 01:33:31.492735    6044 command_runner.go:130] > Mar 28 01:32:35 multinode-240000 kubelet[1533]: E0328 01:32:35.358933    1533 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0328 01:33:31.492735    6044 command_runner.go:130] > Mar 28 01:32:35 multinode-240000 kubelet[1533]: E0328 01:32:35.359250    1533 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/dc1416cc-736d-4eab-b95d-e963572b78e3-config-volume podName:dc1416cc-736d-4eab-b95d-e963572b78e3 nodeName:}" failed. No retries permitted until 2024-03-28 01:32:51.359180546 +0000 UTC m=+37.918190236 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/dc1416cc-736d-4eab-b95d-e963572b78e3-config-volume") pod "coredns-76f75df574-776ph" (UID: "dc1416cc-736d-4eab-b95d-e963572b78e3") : object "kube-system"/"coredns" not registered
	I0328 01:33:31.492735    6044 command_runner.go:130] > Mar 28 01:32:35 multinode-240000 kubelet[1533]: E0328 01:32:35.460013    1533 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0328 01:33:31.492735    6044 command_runner.go:130] > Mar 28 01:32:35 multinode-240000 kubelet[1533]: E0328 01:32:35.460054    1533 projected.go:200] Error preparing data for projected volume kube-api-access-86msg for pod default/busybox-7fdf7869d9-ct428: object "default"/"kube-root-ca.crt" not registered
	I0328 01:33:31.492735    6044 command_runner.go:130] > Mar 28 01:32:35 multinode-240000 kubelet[1533]: E0328 01:32:35.460129    1533 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/82be2bd2-ca76-4804-8e23-ebd40a434863-kube-api-access-86msg podName:82be2bd2-ca76-4804-8e23-ebd40a434863 nodeName:}" failed. No retries permitted until 2024-03-28 01:32:51.460096057 +0000 UTC m=+38.019105747 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-86msg" (UniqueName: "kubernetes.io/projected/82be2bd2-ca76-4804-8e23-ebd40a434863-kube-api-access-86msg") pod "busybox-7fdf7869d9-ct428" (UID: "82be2bd2-ca76-4804-8e23-ebd40a434863") : object "default"/"kube-root-ca.crt" not registered
	I0328 01:33:31.492735    6044 command_runner.go:130] > Mar 28 01:32:35 multinode-240000 kubelet[1533]: E0328 01:32:35.790050    1533 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7fdf7869d9-ct428" podUID="82be2bd2-ca76-4804-8e23-ebd40a434863"
	I0328 01:33:31.492735    6044 command_runner.go:130] > Mar 28 01:32:35 multinode-240000 kubelet[1533]: E0328 01:32:35.792176    1533 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-76f75df574-776ph" podUID="dc1416cc-736d-4eab-b95d-e963572b78e3"
	I0328 01:33:31.492735    6044 command_runner.go:130] > Mar 28 01:32:37 multinode-240000 kubelet[1533]: E0328 01:32:37.791217    1533 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-76f75df574-776ph" podUID="dc1416cc-736d-4eab-b95d-e963572b78e3"
	I0328 01:33:31.492735    6044 command_runner.go:130] > Mar 28 01:32:37 multinode-240000 kubelet[1533]: E0328 01:32:37.792228    1533 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7fdf7869d9-ct428" podUID="82be2bd2-ca76-4804-8e23-ebd40a434863"
	I0328 01:33:31.492735    6044 command_runner.go:130] > Mar 28 01:32:39 multinode-240000 kubelet[1533]: E0328 01:32:39.789082    1533 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7fdf7869d9-ct428" podUID="82be2bd2-ca76-4804-8e23-ebd40a434863"
	I0328 01:33:31.492735    6044 command_runner.go:130] > Mar 28 01:32:39 multinode-240000 kubelet[1533]: E0328 01:32:39.789888    1533 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-76f75df574-776ph" podUID="dc1416cc-736d-4eab-b95d-e963572b78e3"
	I0328 01:33:31.492735    6044 command_runner.go:130] > Mar 28 01:32:41 multinode-240000 kubelet[1533]: E0328 01:32:41.789933    1533 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7fdf7869d9-ct428" podUID="82be2bd2-ca76-4804-8e23-ebd40a434863"
	I0328 01:33:31.492735    6044 command_runner.go:130] > Mar 28 01:32:41 multinode-240000 kubelet[1533]: E0328 01:32:41.790703    1533 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-76f75df574-776ph" podUID="dc1416cc-736d-4eab-b95d-e963572b78e3"
	I0328 01:33:31.492735    6044 command_runner.go:130] > Mar 28 01:32:43 multinode-240000 kubelet[1533]: E0328 01:32:43.789453    1533 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7fdf7869d9-ct428" podUID="82be2bd2-ca76-4804-8e23-ebd40a434863"
	I0328 01:33:31.492735    6044 command_runner.go:130] > Mar 28 01:32:43 multinode-240000 kubelet[1533]: E0328 01:32:43.790318    1533 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-76f75df574-776ph" podUID="dc1416cc-736d-4eab-b95d-e963572b78e3"
	I0328 01:33:31.492735    6044 command_runner.go:130] > Mar 28 01:32:45 multinode-240000 kubelet[1533]: E0328 01:32:45.789795    1533 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-76f75df574-776ph" podUID="dc1416cc-736d-4eab-b95d-e963572b78e3"
	I0328 01:33:31.493725    6044 command_runner.go:130] > Mar 28 01:32:45 multinode-240000 kubelet[1533]: E0328 01:32:45.790497    1533 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7fdf7869d9-ct428" podUID="82be2bd2-ca76-4804-8e23-ebd40a434863"
	I0328 01:33:31.493725    6044 command_runner.go:130] > Mar 28 01:32:47 multinode-240000 kubelet[1533]: E0328 01:32:47.789306    1533 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7fdf7869d9-ct428" podUID="82be2bd2-ca76-4804-8e23-ebd40a434863"
	I0328 01:33:31.493725    6044 command_runner.go:130] > Mar 28 01:32:47 multinode-240000 kubelet[1533]: E0328 01:32:47.790760    1533 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-76f75df574-776ph" podUID="dc1416cc-736d-4eab-b95d-e963572b78e3"
	I0328 01:33:31.493725    6044 command_runner.go:130] > Mar 28 01:32:49 multinode-240000 kubelet[1533]: E0328 01:32:49.790669    1533 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-76f75df574-776ph" podUID="dc1416cc-736d-4eab-b95d-e963572b78e3"
	I0328 01:33:31.493725    6044 command_runner.go:130] > Mar 28 01:32:49 multinode-240000 kubelet[1533]: E0328 01:32:49.800302    1533 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7fdf7869d9-ct428" podUID="82be2bd2-ca76-4804-8e23-ebd40a434863"
	I0328 01:33:31.493725    6044 command_runner.go:130] > Mar 28 01:32:51 multinode-240000 kubelet[1533]: E0328 01:32:51.398046    1533 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0328 01:33:31.493725    6044 command_runner.go:130] > Mar 28 01:32:51 multinode-240000 kubelet[1533]: E0328 01:32:51.399557    1533 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/dc1416cc-736d-4eab-b95d-e963572b78e3-config-volume podName:dc1416cc-736d-4eab-b95d-e963572b78e3 nodeName:}" failed. No retries permitted until 2024-03-28 01:33:23.399534782 +0000 UTC m=+69.958544472 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/dc1416cc-736d-4eab-b95d-e963572b78e3-config-volume") pod "coredns-76f75df574-776ph" (UID: "dc1416cc-736d-4eab-b95d-e963572b78e3") : object "kube-system"/"coredns" not registered
	I0328 01:33:31.493725    6044 command_runner.go:130] > Mar 28 01:32:51 multinode-240000 kubelet[1533]: E0328 01:32:51.499389    1533 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0328 01:33:31.493725    6044 command_runner.go:130] > Mar 28 01:32:51 multinode-240000 kubelet[1533]: E0328 01:32:51.499479    1533 projected.go:200] Error preparing data for projected volume kube-api-access-86msg for pod default/busybox-7fdf7869d9-ct428: object "default"/"kube-root-ca.crt" not registered
	I0328 01:33:31.493725    6044 command_runner.go:130] > Mar 28 01:32:51 multinode-240000 kubelet[1533]: E0328 01:32:51.499555    1533 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/82be2bd2-ca76-4804-8e23-ebd40a434863-kube-api-access-86msg podName:82be2bd2-ca76-4804-8e23-ebd40a434863 nodeName:}" failed. No retries permitted until 2024-03-28 01:33:23.499533548 +0000 UTC m=+70.058543238 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-86msg" (UniqueName: "kubernetes.io/projected/82be2bd2-ca76-4804-8e23-ebd40a434863-kube-api-access-86msg") pod "busybox-7fdf7869d9-ct428" (UID: "82be2bd2-ca76-4804-8e23-ebd40a434863") : object "default"/"kube-root-ca.crt" not registered
	I0328 01:33:31.493725    6044 command_runner.go:130] > Mar 28 01:32:51 multinode-240000 kubelet[1533]: E0328 01:32:51.789982    1533 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7fdf7869d9-ct428" podUID="82be2bd2-ca76-4804-8e23-ebd40a434863"
	I0328 01:33:31.493725    6044 command_runner.go:130] > Mar 28 01:32:51 multinode-240000 kubelet[1533]: E0328 01:32:51.790491    1533 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-76f75df574-776ph" podUID="dc1416cc-736d-4eab-b95d-e963572b78e3"
	I0328 01:33:31.493725    6044 command_runner.go:130] > Mar 28 01:32:52 multinode-240000 kubelet[1533]: I0328 01:32:52.819055    1533 scope.go:117] "RemoveContainer" containerID="d02996b2d57bf7439b634e180f3f28e83a0825e92695a9ca17ecca77cbb5da1c"
	I0328 01:33:31.493725    6044 command_runner.go:130] > Mar 28 01:32:52 multinode-240000 kubelet[1533]: I0328 01:32:52.819508    1533 scope.go:117] "RemoveContainer" containerID="4dcf03394ea80c2d0427ea263be7c533ab3aaf86ba89e254db95cbdd1b70a343"
	I0328 01:33:31.493725    6044 command_runner.go:130] > Mar 28 01:32:52 multinode-240000 kubelet[1533]: E0328 01:32:52.820004    1533 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(3f881c2f-5b9a-4dc5-bc8c-56784b9ff60f)\"" pod="kube-system/storage-provisioner" podUID="3f881c2f-5b9a-4dc5-bc8c-56784b9ff60f"
	I0328 01:33:31.493725    6044 command_runner.go:130] > Mar 28 01:32:53 multinode-240000 kubelet[1533]: E0328 01:32:53.789452    1533 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7fdf7869d9-ct428" podUID="82be2bd2-ca76-4804-8e23-ebd40a434863"
	I0328 01:33:31.493725    6044 command_runner.go:130] > Mar 28 01:32:53 multinode-240000 kubelet[1533]: E0328 01:32:53.791042    1533 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-76f75df574-776ph" podUID="dc1416cc-736d-4eab-b95d-e963572b78e3"
	I0328 01:33:31.493725    6044 command_runner.go:130] > Mar 28 01:32:53 multinode-240000 kubelet[1533]: I0328 01:32:53.945064    1533 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
	I0328 01:33:31.493725    6044 command_runner.go:130] > Mar 28 01:33:04 multinode-240000 kubelet[1533]: I0328 01:33:04.789137    1533 scope.go:117] "RemoveContainer" containerID="4dcf03394ea80c2d0427ea263be7c533ab3aaf86ba89e254db95cbdd1b70a343"
	I0328 01:33:31.493725    6044 command_runner.go:130] > Mar 28 01:33:13 multinode-240000 kubelet[1533]: I0328 01:33:13.803616    1533 scope.go:117] "RemoveContainer" containerID="66f15076d3443d3fc3179676ba45f1cbac7cf2eb673e7741a3dddae0eb5baac8"
	I0328 01:33:31.493725    6044 command_runner.go:130] > Mar 28 01:33:13 multinode-240000 kubelet[1533]: E0328 01:33:13.838374    1533 iptables.go:575] "Could not set up iptables canary" err=<
	I0328 01:33:31.493725    6044 command_runner.go:130] > Mar 28 01:33:13 multinode-240000 kubelet[1533]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	I0328 01:33:31.493725    6044 command_runner.go:130] > Mar 28 01:33:13 multinode-240000 kubelet[1533]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	I0328 01:33:31.493725    6044 command_runner.go:130] > Mar 28 01:33:13 multinode-240000 kubelet[1533]:         Perhaps ip6tables or your kernel needs to be upgraded.
	I0328 01:33:31.493725    6044 command_runner.go:130] > Mar 28 01:33:13 multinode-240000 kubelet[1533]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	I0328 01:33:31.493725    6044 command_runner.go:130] > Mar 28 01:33:13 multinode-240000 kubelet[1533]: I0328 01:33:13.850324    1533 scope.go:117] "RemoveContainer" containerID="a01212226d03a29a5f7e096880ecf627817c14801c81f452beaa1a398b97cfe3"
	I0328 01:33:31.543378    6044 logs.go:123] Gathering logs for kube-apiserver [6539c85e1b61] ...
	I0328 01:33:31.544304    6044 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6539c85e1b61"
	I0328 01:33:31.577377    6044 command_runner.go:130] ! I0328 01:32:16.440903       1 options.go:222] external host was not specified, using 172.28.229.19
	I0328 01:33:31.579311    6044 command_runner.go:130] ! I0328 01:32:16.443001       1 server.go:148] Version: v1.29.3
	I0328 01:33:31.579636    6044 command_runner.go:130] ! I0328 01:32:16.443211       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0328 01:33:31.579636    6044 command_runner.go:130] ! I0328 01:32:17.234065       1 shared_informer.go:311] Waiting for caches to sync for node_authorizer
	I0328 01:33:31.579636    6044 command_runner.go:130] ! I0328 01:32:17.251028       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0328 01:33:31.579902    6044 command_runner.go:130] ! I0328 01:32:17.252647       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0328 01:33:31.579902    6044 command_runner.go:130] ! I0328 01:32:17.253295       1 instance.go:297] Using reconciler: lease
	I0328 01:33:31.579902    6044 command_runner.go:130] ! I0328 01:32:17.488371       1 handler.go:275] Adding GroupVersion apiextensions.k8s.io v1 to ResourceManager
	I0328 01:33:31.579902    6044 command_runner.go:130] ! W0328 01:32:17.492937       1 genericapiserver.go:742] Skipping API apiextensions.k8s.io/v1beta1 because it has no resources.
	I0328 01:33:31.580028    6044 command_runner.go:130] ! I0328 01:32:17.992938       1 handler.go:275] Adding GroupVersion  v1 to ResourceManager
	I0328 01:33:31.580028    6044 command_runner.go:130] ! I0328 01:32:17.993291       1 instance.go:693] API group "internal.apiserver.k8s.io" is not enabled, skipping.
	I0328 01:33:31.580028    6044 command_runner.go:130] ! I0328 01:32:18.498808       1 instance.go:693] API group "resource.k8s.io" is not enabled, skipping.
	I0328 01:33:31.580146    6044 command_runner.go:130] ! I0328 01:32:18.513162       1 handler.go:275] Adding GroupVersion authentication.k8s.io v1 to ResourceManager
	I0328 01:33:31.580146    6044 command_runner.go:130] ! W0328 01:32:18.513265       1 genericapiserver.go:742] Skipping API authentication.k8s.io/v1beta1 because it has no resources.
	I0328 01:33:31.580254    6044 command_runner.go:130] ! W0328 01:32:18.513276       1 genericapiserver.go:742] Skipping API authentication.k8s.io/v1alpha1 because it has no resources.
	I0328 01:33:31.580254    6044 command_runner.go:130] ! I0328 01:32:18.513869       1 handler.go:275] Adding GroupVersion authorization.k8s.io v1 to ResourceManager
	I0328 01:33:31.580254    6044 command_runner.go:130] ! W0328 01:32:18.513921       1 genericapiserver.go:742] Skipping API authorization.k8s.io/v1beta1 because it has no resources.
	I0328 01:33:31.580366    6044 command_runner.go:130] ! I0328 01:32:18.515227       1 handler.go:275] Adding GroupVersion autoscaling v2 to ResourceManager
	I0328 01:33:31.580366    6044 command_runner.go:130] ! I0328 01:32:18.516586       1 handler.go:275] Adding GroupVersion autoscaling v1 to ResourceManager
	I0328 01:33:31.580467    6044 command_runner.go:130] ! W0328 01:32:18.516885       1 genericapiserver.go:742] Skipping API autoscaling/v2beta1 because it has no resources.
	I0328 01:33:31.580467    6044 command_runner.go:130] ! W0328 01:32:18.516898       1 genericapiserver.go:742] Skipping API autoscaling/v2beta2 because it has no resources.
	I0328 01:33:31.580575    6044 command_runner.go:130] ! I0328 01:32:18.519356       1 handler.go:275] Adding GroupVersion batch v1 to ResourceManager
	I0328 01:33:31.580575    6044 command_runner.go:130] ! W0328 01:32:18.519460       1 genericapiserver.go:742] Skipping API batch/v1beta1 because it has no resources.
	I0328 01:33:31.580575    6044 command_runner.go:130] ! I0328 01:32:18.520668       1 handler.go:275] Adding GroupVersion certificates.k8s.io v1 to ResourceManager
	I0328 01:33:31.580575    6044 command_runner.go:130] ! W0328 01:32:18.520820       1 genericapiserver.go:742] Skipping API certificates.k8s.io/v1beta1 because it has no resources.
	I0328 01:33:31.580748    6044 command_runner.go:130] ! W0328 01:32:18.520830       1 genericapiserver.go:742] Skipping API certificates.k8s.io/v1alpha1 because it has no resources.
	I0328 01:33:31.580838    6044 command_runner.go:130] ! I0328 01:32:18.521802       1 handler.go:275] Adding GroupVersion coordination.k8s.io v1 to ResourceManager
	I0328 01:33:31.580838    6044 command_runner.go:130] ! W0328 01:32:18.521903       1 genericapiserver.go:742] Skipping API coordination.k8s.io/v1beta1 because it has no resources.
	I0328 01:33:31.580933    6044 command_runner.go:130] ! W0328 01:32:18.521953       1 genericapiserver.go:742] Skipping API discovery.k8s.io/v1beta1 because it has no resources.
	I0328 01:33:31.580933    6044 command_runner.go:130] ! I0328 01:32:18.523269       1 handler.go:275] Adding GroupVersion discovery.k8s.io v1 to ResourceManager
	I0328 01:33:31.580933    6044 command_runner.go:130] ! I0328 01:32:18.525859       1 handler.go:275] Adding GroupVersion networking.k8s.io v1 to ResourceManager
	I0328 01:33:31.580933    6044 command_runner.go:130] ! W0328 01:32:18.525960       1 genericapiserver.go:742] Skipping API networking.k8s.io/v1beta1 because it has no resources.
	I0328 01:33:31.581053    6044 command_runner.go:130] ! W0328 01:32:18.525970       1 genericapiserver.go:742] Skipping API networking.k8s.io/v1alpha1 because it has no resources.
	I0328 01:33:31.581158    6044 command_runner.go:130] ! I0328 01:32:18.526646       1 handler.go:275] Adding GroupVersion node.k8s.io v1 to ResourceManager
	I0328 01:33:31.581216    6044 command_runner.go:130] ! W0328 01:32:18.526842       1 genericapiserver.go:742] Skipping API node.k8s.io/v1beta1 because it has no resources.
	I0328 01:33:31.581262    6044 command_runner.go:130] ! W0328 01:32:18.526857       1 genericapiserver.go:742] Skipping API node.k8s.io/v1alpha1 because it has no resources.
	I0328 01:33:31.581262    6044 command_runner.go:130] ! I0328 01:32:18.527970       1 handler.go:275] Adding GroupVersion policy v1 to ResourceManager
	I0328 01:33:31.581446    6044 command_runner.go:130] ! W0328 01:32:18.528080       1 genericapiserver.go:742] Skipping API policy/v1beta1 because it has no resources.
	I0328 01:33:31.581560    6044 command_runner.go:130] ! I0328 01:32:18.530546       1 handler.go:275] Adding GroupVersion rbac.authorization.k8s.io v1 to ResourceManager
	I0328 01:33:31.581598    6044 command_runner.go:130] ! W0328 01:32:18.530652       1 genericapiserver.go:742] Skipping API rbac.authorization.k8s.io/v1beta1 because it has no resources.
	I0328 01:33:31.581690    6044 command_runner.go:130] ! W0328 01:32:18.530663       1 genericapiserver.go:742] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
	I0328 01:33:31.581736    6044 command_runner.go:130] ! I0328 01:32:18.531469       1 handler.go:275] Adding GroupVersion scheduling.k8s.io v1 to ResourceManager
	I0328 01:33:31.581736    6044 command_runner.go:130] ! W0328 01:32:18.531576       1 genericapiserver.go:742] Skipping API scheduling.k8s.io/v1beta1 because it has no resources.
	I0328 01:33:31.581803    6044 command_runner.go:130] ! W0328 01:32:18.531586       1 genericapiserver.go:742] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
	I0328 01:33:31.581868    6044 command_runner.go:130] ! I0328 01:32:18.534848       1 handler.go:275] Adding GroupVersion storage.k8s.io v1 to ResourceManager
	I0328 01:33:31.581868    6044 command_runner.go:130] ! W0328 01:32:18.534946       1 genericapiserver.go:742] Skipping API storage.k8s.io/v1beta1 because it has no resources.
	I0328 01:33:31.581927    6044 command_runner.go:130] ! W0328 01:32:18.534974       1 genericapiserver.go:742] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
	I0328 01:33:31.581927    6044 command_runner.go:130] ! I0328 01:32:18.537355       1 handler.go:275] Adding GroupVersion flowcontrol.apiserver.k8s.io v1beta3 to ResourceManager
	I0328 01:33:31.581990    6044 command_runner.go:130] ! I0328 01:32:18.539242       1 handler.go:275] Adding GroupVersion flowcontrol.apiserver.k8s.io v1 to ResourceManager
	I0328 01:33:31.582048    6044 command_runner.go:130] ! W0328 01:32:18.539354       1 genericapiserver.go:742] Skipping API flowcontrol.apiserver.k8s.io/v1beta2 because it has no resources.
	I0328 01:33:31.582110    6044 command_runner.go:130] ! W0328 01:32:18.539387       1 genericapiserver.go:742] Skipping API flowcontrol.apiserver.k8s.io/v1beta1 because it has no resources.
	I0328 01:33:31.582110    6044 command_runner.go:130] ! I0328 01:32:18.545662       1 handler.go:275] Adding GroupVersion apps v1 to ResourceManager
	I0328 01:33:31.582168    6044 command_runner.go:130] ! W0328 01:32:18.545825       1 genericapiserver.go:742] Skipping API apps/v1beta2 because it has no resources.
	I0328 01:33:31.582168    6044 command_runner.go:130] ! W0328 01:32:18.545834       1 genericapiserver.go:742] Skipping API apps/v1beta1 because it has no resources.
	I0328 01:33:31.582232    6044 command_runner.go:130] ! I0328 01:32:18.547229       1 handler.go:275] Adding GroupVersion admissionregistration.k8s.io v1 to ResourceManager
	I0328 01:33:31.582290    6044 command_runner.go:130] ! W0328 01:32:18.547341       1 genericapiserver.go:742] Skipping API admissionregistration.k8s.io/v1beta1 because it has no resources.
	I0328 01:33:31.582290    6044 command_runner.go:130] ! W0328 01:32:18.547350       1 genericapiserver.go:742] Skipping API admissionregistration.k8s.io/v1alpha1 because it has no resources.
	I0328 01:33:31.582413    6044 command_runner.go:130] ! I0328 01:32:18.548292       1 handler.go:275] Adding GroupVersion events.k8s.io v1 to ResourceManager
	I0328 01:33:31.582413    6044 command_runner.go:130] ! W0328 01:32:18.548390       1 genericapiserver.go:742] Skipping API events.k8s.io/v1beta1 because it has no resources.
	I0328 01:33:31.582478    6044 command_runner.go:130] ! I0328 01:32:18.574598       1 handler.go:275] Adding GroupVersion apiregistration.k8s.io v1 to ResourceManager
	I0328 01:33:31.582606    6044 command_runner.go:130] ! W0328 01:32:18.574814       1 genericapiserver.go:742] Skipping API apiregistration.k8s.io/v1beta1 because it has no resources.
	I0328 01:33:31.582663    6044 command_runner.go:130] ! I0328 01:32:19.274952       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0328 01:33:31.582663    6044 command_runner.go:130] ! I0328 01:32:19.275081       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0328 01:33:31.582728    6044 command_runner.go:130] ! I0328 01:32:19.275445       1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key"
	I0328 01:33:31.582787    6044 command_runner.go:130] ! I0328 01:32:19.275546       1 secure_serving.go:213] Serving securely on [::]:8443
	I0328 01:33:31.582787    6044 command_runner.go:130] ! I0328 01:32:19.275631       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0328 01:33:31.582849    6044 command_runner.go:130] ! I0328 01:32:19.276130       1 controller.go:80] Starting OpenAPI V3 AggregationController
	I0328 01:33:31.582888    6044 command_runner.go:130] ! I0328 01:32:19.279110       1 available_controller.go:423] Starting AvailableConditionController
	I0328 01:33:31.582989    6044 command_runner.go:130] ! I0328 01:32:19.280530       1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
	I0328 01:33:31.582989    6044 command_runner.go:130] ! I0328 01:32:19.289454       1 controller.go:116] Starting legacy_token_tracking_controller
	I0328 01:33:31.582989    6044 command_runner.go:130] ! I0328 01:32:19.289554       1 shared_informer.go:311] Waiting for caches to sync for configmaps
	I0328 01:33:31.582989    6044 command_runner.go:130] ! I0328 01:32:19.289661       1 aggregator.go:163] waiting for initial CRD sync...
	I0328 01:33:31.582989    6044 command_runner.go:130] ! I0328 01:32:19.291196       1 gc_controller.go:78] Starting apiserver lease garbage collector
	I0328 01:33:31.583095    6044 command_runner.go:130] ! I0328 01:32:19.291542       1 dynamic_serving_content.go:132] "Starting controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key"
	I0328 01:33:31.583095    6044 command_runner.go:130] ! I0328 01:32:19.292314       1 apiservice_controller.go:97] Starting APIServiceRegistrationController
	I0328 01:33:31.583095    6044 command_runner.go:130] ! I0328 01:32:19.292353       1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
	I0328 01:33:31.583095    6044 command_runner.go:130] ! I0328 01:32:19.292376       1 controller.go:78] Starting OpenAPI AggregationController
	I0328 01:33:31.583095    6044 command_runner.go:130] ! I0328 01:32:19.293395       1 system_namespaces_controller.go:67] Starting system namespaces controller
	I0328 01:33:31.583215    6044 command_runner.go:130] ! I0328 01:32:19.293575       1 customresource_discovery_controller.go:289] Starting DiscoveryController
	I0328 01:33:31.583215    6044 command_runner.go:130] ! I0328 01:32:19.279263       1 handler_discovery.go:412] Starting ResourceDiscoveryManager
	I0328 01:33:31.583215    6044 command_runner.go:130] ! I0328 01:32:19.301011       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0328 01:33:31.583328    6044 command_runner.go:130] ! I0328 01:32:19.301029       1 shared_informer.go:311] Waiting for caches to sync for crd-autoregister
	I0328 01:33:31.583328    6044 command_runner.go:130] ! I0328 01:32:19.304174       1 controller.go:133] Starting OpenAPI controller
	I0328 01:33:31.583328    6044 command_runner.go:130] ! I0328 01:32:19.304213       1 controller.go:85] Starting OpenAPI V3 controller
	I0328 01:33:31.583328    6044 command_runner.go:130] ! I0328 01:32:19.306745       1 naming_controller.go:291] Starting NamingConditionController
	I0328 01:33:31.583328    6044 command_runner.go:130] ! I0328 01:32:19.306779       1 establishing_controller.go:76] Starting EstablishingController
	I0328 01:33:31.583328    6044 command_runner.go:130] ! I0328 01:32:19.306794       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0328 01:33:31.583328    6044 command_runner.go:130] ! I0328 01:32:19.306807       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0328 01:33:31.583328    6044 command_runner.go:130] ! I0328 01:32:19.306818       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0328 01:33:31.583328    6044 command_runner.go:130] ! I0328 01:32:19.279295       1 apf_controller.go:374] Starting API Priority and Fairness config controller
	I0328 01:33:31.583655    6044 command_runner.go:130] ! I0328 01:32:19.279442       1 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
	I0328 01:33:31.583655    6044 command_runner.go:130] ! I0328 01:32:19.312069       1 shared_informer.go:311] Waiting for caches to sync for cluster_authentication_trust_controller
	I0328 01:33:31.583655    6044 command_runner.go:130] ! I0328 01:32:19.334928       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0328 01:33:31.583655    6044 command_runner.go:130] ! I0328 01:32:19.335653       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0328 01:33:31.583655    6044 command_runner.go:130] ! I0328 01:32:19.499336       1 shared_informer.go:318] Caches are synced for configmaps
	I0328 01:33:31.583655    6044 command_runner.go:130] ! I0328 01:32:19.501912       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0328 01:33:31.583655    6044 command_runner.go:130] ! I0328 01:32:19.504433       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0328 01:33:31.583655    6044 command_runner.go:130] ! I0328 01:32:19.506496       1 aggregator.go:165] initial CRD sync complete...
	I0328 01:33:31.583655    6044 command_runner.go:130] ! I0328 01:32:19.506538       1 autoregister_controller.go:141] Starting autoregister controller
	I0328 01:33:31.583655    6044 command_runner.go:130] ! I0328 01:32:19.506548       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0328 01:33:31.583882    6044 command_runner.go:130] ! I0328 01:32:19.506871       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0328 01:33:31.583882    6044 command_runner.go:130] ! I0328 01:32:19.506977       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0328 01:33:31.583882    6044 command_runner.go:130] ! I0328 01:32:19.519086       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0328 01:33:31.583882    6044 command_runner.go:130] ! I0328 01:32:19.542058       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0328 01:33:31.583882    6044 command_runner.go:130] ! I0328 01:32:19.580921       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0328 01:33:31.583882    6044 command_runner.go:130] ! I0328 01:32:19.592848       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0328 01:33:31.583882    6044 command_runner.go:130] ! I0328 01:32:19.608262       1 cache.go:39] Caches are synced for autoregister controller
	I0328 01:33:31.583882    6044 command_runner.go:130] ! I0328 01:32:20.302603       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0328 01:33:31.584005    6044 command_runner.go:130] ! W0328 01:32:20.857698       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.28.227.122 172.28.229.19]
	I0328 01:33:31.584119    6044 command_runner.go:130] ! I0328 01:32:20.859624       1 controller.go:624] quota admission added evaluator for: endpoints
	I0328 01:33:31.584203    6044 command_runner.go:130] ! I0328 01:32:20.870212       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0328 01:33:31.584203    6044 command_runner.go:130] ! I0328 01:32:22.795650       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0328 01:33:31.584203    6044 command_runner.go:130] ! I0328 01:32:23.151124       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0328 01:33:31.584281    6044 command_runner.go:130] ! I0328 01:32:23.177645       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0328 01:33:31.584281    6044 command_runner.go:130] ! I0328 01:32:23.338313       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0328 01:33:31.584281    6044 command_runner.go:130] ! I0328 01:32:23.353620       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0328 01:33:31.584281    6044 command_runner.go:130] ! W0328 01:32:40.864669       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.28.229.19]
	I0328 01:33:31.592104    6044 logs.go:123] Gathering logs for etcd [ab4a76ecb029] ...
	I0328 01:33:31.592104    6044 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab4a76ecb029"
	I0328 01:33:31.634287    6044 command_runner.go:130] ! {"level":"warn","ts":"2024-03-28T01:32:15.724971Z","caller":"embed/config.go:679","msg":"Running http and grpc server on single port. This is not recommended for production."}
	I0328 01:33:31.634615    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:15.726473Z","caller":"etcdmain/etcd.go:73","msg":"Running: ","args":["etcd","--advertise-client-urls=https://172.28.229.19:2379","--cert-file=/var/lib/minikube/certs/etcd/server.crt","--client-cert-auth=true","--data-dir=/var/lib/minikube/etcd","--experimental-initial-corrupt-check=true","--experimental-watch-progress-notify-interval=5s","--initial-advertise-peer-urls=https://172.28.229.19:2380","--initial-cluster=multinode-240000=https://172.28.229.19:2380","--key-file=/var/lib/minikube/certs/etcd/server.key","--listen-client-urls=https://127.0.0.1:2379,https://172.28.229.19:2379","--listen-metrics-urls=http://127.0.0.1:2381","--listen-peer-urls=https://172.28.229.19:2380","--name=multinode-240000","--peer-cert-file=/var/lib/minikube/certs/etcd/peer.crt","--peer-client-cert-auth=true","--peer-key-file=/var/lib/minikube/certs/etcd/peer.key","--peer-trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt","--proxy-refresh-interval=70000","--snapshot-count=10000","--trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt"]}
	I0328 01:33:31.634698    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:15.727203Z","caller":"etcdmain/etcd.go:116","msg":"server has been already initialized","data-dir":"/var/lib/minikube/etcd","dir-type":"member"}
	I0328 01:33:31.634698    6044 command_runner.go:130] ! {"level":"warn","ts":"2024-03-28T01:32:15.727384Z","caller":"embed/config.go:679","msg":"Running http and grpc server on single port. This is not recommended for production."}
	I0328 01:33:31.634698    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:15.727623Z","caller":"embed/etcd.go:127","msg":"configuring peer listeners","listen-peer-urls":["https://172.28.229.19:2380"]}
	I0328 01:33:31.634757    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:15.728158Z","caller":"embed/etcd.go:494","msg":"starting with peer TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	I0328 01:33:31.634757    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:15.738374Z","caller":"embed/etcd.go:135","msg":"configuring client listeners","listen-client-urls":["https://127.0.0.1:2379","https://172.28.229.19:2379"]}
	I0328 01:33:31.637280    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:15.74108Z","caller":"embed/etcd.go:308","msg":"starting an etcd server","etcd-version":"3.5.12","git-sha":"e7b3bb6cc","go-version":"go1.20.13","go-os":"linux","go-arch":"amd64","max-cpu-set":2,"max-cpu-available":2,"member-initialized":true,"name":"multinode-240000","data-dir":"/var/lib/minikube/etcd","wal-dir":"","wal-dir-dedicated":"","member-dir":"/var/lib/minikube/etcd/member","force-new-cluster":false,"heartbeat-interval":"100ms","election-timeout":"1s","initial-election-tick-advance":true,"snapshot-count":10000,"max-wals":5,"max-snapshots":5,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["https://172.28.229.19:2380"],"listen-peer-urls":["https://172.28.229.19:2380"],"advertise-client-urls":["https://172.28.229.19:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.28.229.19:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"],"cors":["*"],"host-whitelist":["*"],"initial-cluster":"","initial-cluster-state":"new","initial-cluster-token":"","quota-backend-bytes":2147483648,"max-request-bytes":1572864,"max-concurrent-streams":4294967295,"pre-vote":true,"initial-corrupt-check":true,"corrupt-check-time-interval":"0s","compact-check-time-enabled":false,"compact-check-time-interval":"1m0s","auto-compaction-mode":"periodic","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":"","downgrade-check-interval":"5s"}
	I0328 01:33:31.638395    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:15.764546Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"21.677054ms"}
	I0328 01:33:31.638477    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:15.798451Z","caller":"etcdserver/server.go:532","msg":"No snapshot found. Recovering WAL from scratch!"}
	I0328 01:33:31.638553    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:15.829844Z","caller":"etcdserver/raft.go:530","msg":"restarting local member","cluster-id":"9d63dbc5e8f5386f","local-member-id":"8337aaa1903c5250","commit-index":2146}
	I0328 01:33:31.638573    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:15.830336Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8337aaa1903c5250 switched to configuration voters=()"}
	I0328 01:33:31.638694    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:15.830979Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8337aaa1903c5250 became follower at term 2"}
	I0328 01:33:31.638694    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:15.831279Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft 8337aaa1903c5250 [peers: [], term: 2, commit: 2146, applied: 0, lastindex: 2146, lastterm: 2]"}
	I0328 01:33:31.638694    6044 command_runner.go:130] ! {"level":"warn","ts":"2024-03-28T01:32:15.847923Z","caller":"auth/store.go:1241","msg":"simple token is not cryptographically signed"}
	I0328 01:33:31.638694    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:15.855761Z","caller":"mvcc/kvstore.go:341","msg":"restored last compact revision","meta-bucket-name":"meta","meta-bucket-name-key":"finishedCompactRev","restored-compact-revision":1393}
	I0328 01:33:31.638694    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:15.869333Z","caller":"mvcc/kvstore.go:407","msg":"kvstore restored","current-rev":1856}
	I0328 01:33:31.638694    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:15.878748Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	I0328 01:33:31.638694    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:15.88958Z","caller":"etcdserver/corrupt.go:96","msg":"starting initial corruption check","local-member-id":"8337aaa1903c5250","timeout":"7s"}
	I0328 01:33:31.638694    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:15.890509Z","caller":"etcdserver/corrupt.go:177","msg":"initial corruption checking passed; no corruption","local-member-id":"8337aaa1903c5250"}
	I0328 01:33:31.638694    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:15.890567Z","caller":"etcdserver/server.go:860","msg":"starting etcd server","local-member-id":"8337aaa1903c5250","local-server-version":"3.5.12","cluster-version":"to_be_decided"}
	I0328 01:33:31.638694    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:15.891226Z","caller":"etcdserver/server.go:760","msg":"starting initial election tick advance","election-ticks":10}
	I0328 01:33:31.638694    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:15.894393Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	I0328 01:33:31.638694    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:15.894489Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	I0328 01:33:31.638694    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:15.894506Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	I0328 01:33:31.638694    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:15.895008Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8337aaa1903c5250 switched to configuration voters=(9455213553573974608)"}
	I0328 01:33:31.638694    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:15.895115Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"9d63dbc5e8f5386f","local-member-id":"8337aaa1903c5250","added-peer-id":"8337aaa1903c5250","added-peer-peer-urls":["https://172.28.227.122:2380"]}
	I0328 01:33:31.638694    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:15.895259Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"9d63dbc5e8f5386f","local-member-id":"8337aaa1903c5250","cluster-version":"3.5"}
	I0328 01:33:31.638694    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:15.895348Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	I0328 01:33:31.639224    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:15.908515Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	I0328 01:33:31.639346    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:15.908865Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"8337aaa1903c5250","initial-advertise-peer-urls":["https://172.28.229.19:2380"],"listen-peer-urls":["https://172.28.229.19:2380"],"advertise-client-urls":["https://172.28.229.19:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.28.229.19:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	I0328 01:33:31.639346    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:15.908914Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	I0328 01:33:31.639346    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:15.908997Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"172.28.229.19:2380"}
	I0328 01:33:31.639346    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:15.909011Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"172.28.229.19:2380"}
	I0328 01:33:31.639346    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:17.232003Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8337aaa1903c5250 is starting a new election at term 2"}
	I0328 01:33:31.639346    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:17.232075Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8337aaa1903c5250 became pre-candidate at term 2"}
	I0328 01:33:31.639346    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:17.232112Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8337aaa1903c5250 received MsgPreVoteResp from 8337aaa1903c5250 at term 2"}
	I0328 01:33:31.639346    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:17.232126Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8337aaa1903c5250 became candidate at term 3"}
	I0328 01:33:31.639346    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:17.232135Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8337aaa1903c5250 received MsgVoteResp from 8337aaa1903c5250 at term 3"}
	I0328 01:33:31.639346    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:17.232146Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8337aaa1903c5250 became leader at term 3"}
	I0328 01:33:31.639346    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:17.232158Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 8337aaa1903c5250 elected leader 8337aaa1903c5250 at term 3"}
	I0328 01:33:31.639346    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:17.237341Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"8337aaa1903c5250","local-member-attributes":"{Name:multinode-240000 ClientURLs:[https://172.28.229.19:2379]}","request-path":"/0/members/8337aaa1903c5250/attributes","cluster-id":"9d63dbc5e8f5386f","publish-timeout":"7s"}
	I0328 01:33:31.639346    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:17.237562Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	I0328 01:33:31.639346    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:17.239961Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	I0328 01:33:31.639346    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:17.263569Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	I0328 01:33:31.639346    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:17.263595Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	I0328 01:33:31.639346    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:17.283007Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.28.229.19:2379"}
	I0328 01:33:31.639346    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:17.301354Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	I0328 01:33:31.647861    6044 logs.go:123] Gathering logs for coredns [e6a5a75ec447] ...
	I0328 01:33:31.647861    6044 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6a5a75ec447"
	I0328 01:33:31.681339    6044 command_runner.go:130] > .:53
	I0328 01:33:31.681339    6044 command_runner.go:130] > [INFO] plugin/reload: Running configuration SHA512 = 61f4d0960164fdf8d8157aaa96d041acf5b29f3c98ba802d705114162ff9f2cc889bbb973f9b8023f3112734912ee6f4eadc4faa21115183d5697de30dae3805
	I0328 01:33:31.681339    6044 command_runner.go:130] > CoreDNS-1.11.1
	I0328 01:33:31.681339    6044 command_runner.go:130] > linux/amd64, go1.20.7, ae2bbc2
	I0328 01:33:31.681339    6044 command_runner.go:130] > [INFO] 127.0.0.1:56542 - 57483 "HINFO IN 863318367541877849.2825438388179145044. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.037994825s
	I0328 01:33:31.681339    6044 logs.go:123] Gathering logs for coredns [29e516c918ef] ...
	I0328 01:33:31.681339    6044 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29e516c918ef"
	I0328 01:33:31.721923    6044 command_runner.go:130] > .:53
	I0328 01:33:31.721995    6044 command_runner.go:130] > [INFO] plugin/reload: Running configuration SHA512 = 61f4d0960164fdf8d8157aaa96d041acf5b29f3c98ba802d705114162ff9f2cc889bbb973f9b8023f3112734912ee6f4eadc4faa21115183d5697de30dae3805
	I0328 01:33:31.721995    6044 command_runner.go:130] > CoreDNS-1.11.1
	I0328 01:33:31.721995    6044 command_runner.go:130] > linux/amd64, go1.20.7, ae2bbc2
	I0328 01:33:31.722081    6044 command_runner.go:130] > [INFO] 127.0.0.1:60283 - 16312 "HINFO IN 2326044719089555672.3300393267380208701. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.054677372s
	I0328 01:33:31.722081    6044 command_runner.go:130] > [INFO] 10.244.0.3:41371 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000247501s
	I0328 01:33:31.722081    6044 command_runner.go:130] > [INFO] 10.244.0.3:43447 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.117900616s
	I0328 01:33:31.722145    6044 command_runner.go:130] > [INFO] 10.244.0.3:42513 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd 60 0.033474818s
	I0328 01:33:31.722145    6044 command_runner.go:130] > [INFO] 10.244.0.3:40448 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 1.188161196s
	I0328 01:33:31.722145    6044 command_runner.go:130] > [INFO] 10.244.1.2:56943 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000152401s
	I0328 01:33:31.722145    6044 command_runner.go:130] > [INFO] 10.244.1.2:41058 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 140 0.000086901s
	I0328 01:33:31.722196    6044 command_runner.go:130] > [INFO] 10.244.1.2:34293 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd 60 0.0000605s
	I0328 01:33:31.722221    6044 command_runner.go:130] > [INFO] 10.244.1.2:49894 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,aa,rd,ra 140 0.00006s
	I0328 01:33:31.722221    6044 command_runner.go:130] > [INFO] 10.244.0.3:49837 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001111s
	I0328 01:33:31.722221    6044 command_runner.go:130] > [INFO] 10.244.0.3:33220 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.017189461s
	I0328 01:33:31.722278    6044 command_runner.go:130] > [INFO] 10.244.0.3:45579 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000277601s
	I0328 01:33:31.722278    6044 command_runner.go:130] > [INFO] 10.244.0.3:51082 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000190101s
	I0328 01:33:31.722330    6044 command_runner.go:130] > [INFO] 10.244.0.3:51519 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.026528294s
	I0328 01:33:31.722367    6044 command_runner.go:130] > [INFO] 10.244.0.3:59498 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000117701s
	I0328 01:33:31.722367    6044 command_runner.go:130] > [INFO] 10.244.0.3:42474 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000217s
	I0328 01:33:31.722396    6044 command_runner.go:130] > [INFO] 10.244.0.3:60151 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0001204s
	I0328 01:33:31.722422    6044 command_runner.go:130] > [INFO] 10.244.1.2:50831 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001128s
	I0328 01:33:31.722447    6044 command_runner.go:130] > [INFO] 10.244.1.2:41628 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.0000727s
	I0328 01:33:31.722447    6044 command_runner.go:130] > [INFO] 10.244.1.2:58750 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000090601s
	I0328 01:33:31.722501    6044 command_runner.go:130] > [INFO] 10.244.1.2:59003 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0000565s
	I0328 01:33:31.722526    6044 command_runner.go:130] > [INFO] 10.244.1.2:44988 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.0000534s
	I0328 01:33:31.722561    6044 command_runner.go:130] > [INFO] 10.244.1.2:46242 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0000553s
	I0328 01:33:31.722593    6044 command_runner.go:130] > [INFO] 10.244.1.2:54917 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0000638s
	I0328 01:33:31.722593    6044 command_runner.go:130] > [INFO] 10.244.1.2:39304 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000177201s
	I0328 01:33:31.722621    6044 command_runner.go:130] > [INFO] 10.244.0.3:48823 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0000796s
	I0328 01:33:31.722659    6044 command_runner.go:130] > [INFO] 10.244.0.3:44709 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000142901s
	I0328 01:33:31.722659    6044 command_runner.go:130] > [INFO] 10.244.0.3:48375 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0000774s
	I0328 01:33:31.722659    6044 command_runner.go:130] > [INFO] 10.244.0.3:58925 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000125101s
	I0328 01:33:31.722697    6044 command_runner.go:130] > [INFO] 10.244.1.2:59246 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001171s
	I0328 01:33:31.722697    6044 command_runner.go:130] > [INFO] 10.244.1.2:47730 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0000697s
	I0328 01:33:31.722697    6044 command_runner.go:130] > [INFO] 10.244.1.2:33031 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0000695s
	I0328 01:33:31.722761    6044 command_runner.go:130] > [INFO] 10.244.1.2:50853 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000057s
	I0328 01:33:31.722803    6044 command_runner.go:130] > [INFO] 10.244.0.3:39682 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000390002s
	I0328 01:33:31.722803    6044 command_runner.go:130] > [INFO] 10.244.0.3:52761 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000108301s
	I0328 01:33:31.722863    6044 command_runner.go:130] > [INFO] 10.244.0.3:46476 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000158601s
	I0328 01:33:31.722863    6044 command_runner.go:130] > [INFO] 10.244.0.3:57613 - 5 "PTR IN 1.224.28.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.0000931s
	I0328 01:33:31.722863    6044 command_runner.go:130] > [INFO] 10.244.1.2:43367 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000233301s
	I0328 01:33:31.722919    6044 command_runner.go:130] > [INFO] 10.244.1.2:50120 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.0002331s
	I0328 01:33:31.722943    6044 command_runner.go:130] > [INFO] 10.244.1.2:43779 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.0000821s
	I0328 01:33:31.722943    6044 command_runner.go:130] > [INFO] 10.244.1.2:37155 - 5 "PTR IN 1.224.28.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.0000589s
	I0328 01:33:31.722970    6044 command_runner.go:130] > [INFO] SIGTERM: Shutting down servers then terminating
	I0328 01:33:31.722970    6044 command_runner.go:130] > [INFO] plugin/health: Going into lameduck mode for 5s
	I0328 01:33:31.725730    6044 logs.go:123] Gathering logs for kube-scheduler [bc83a37dbd03] ...
	I0328 01:33:31.725730    6044 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc83a37dbd03"
	I0328 01:33:31.757501    6044 command_runner.go:130] ! I0328 01:32:16.704993       1 serving.go:380] Generated self-signed cert in-memory
	I0328 01:33:31.758322    6044 command_runner.go:130] ! W0328 01:32:19.361735       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	I0328 01:33:31.758389    6044 command_runner.go:130] ! W0328 01:32:19.361772       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	I0328 01:33:31.758389    6044 command_runner.go:130] ! W0328 01:32:19.361786       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	I0328 01:33:31.758389    6044 command_runner.go:130] ! W0328 01:32:19.361794       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0328 01:33:31.758389    6044 command_runner.go:130] ! I0328 01:32:19.443650       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.29.3"
	I0328 01:33:31.758389    6044 command_runner.go:130] ! I0328 01:32:19.443696       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0328 01:33:31.758489    6044 command_runner.go:130] ! I0328 01:32:19.451824       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0328 01:33:31.758489    6044 command_runner.go:130] ! I0328 01:32:19.452157       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0328 01:33:31.758539    6044 command_runner.go:130] ! I0328 01:32:19.452206       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0328 01:33:31.758539    6044 command_runner.go:130] ! I0328 01:32:19.452231       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0328 01:33:31.758539    6044 command_runner.go:130] ! I0328 01:32:19.556393       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0328 01:33:31.759527    6044 logs.go:123] Gathering logs for kindnet [dc9808261b21] ...
	I0328 01:33:31.759527    6044 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc9808261b21"
	I0328 01:33:31.797130    6044 command_runner.go:130] ! I0328 01:18:33.819057       1 main.go:227] handling current node
	I0328 01:33:31.797130    6044 command_runner.go:130] ! I0328 01:18:33.819073       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:31.797130    6044 command_runner.go:130] ! I0328 01:18:33.819080       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:31.798012    6044 command_runner.go:130] ! I0328 01:18:33.819256       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:31.798012    6044 command_runner.go:130] ! I0328 01:18:33.819279       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:31.798012    6044 command_runner.go:130] ! I0328 01:18:43.840507       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:31.798012    6044 command_runner.go:130] ! I0328 01:18:43.840617       1 main.go:227] handling current node
	I0328 01:33:31.798012    6044 command_runner.go:130] ! I0328 01:18:43.840633       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:31.798012    6044 command_runner.go:130] ! I0328 01:18:43.840643       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:31.798012    6044 command_runner.go:130] ! I0328 01:18:43.841217       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:31.798012    6044 command_runner.go:130] ! I0328 01:18:43.841333       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:31.798012    6044 command_runner.go:130] ! I0328 01:18:53.861521       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:31.798171    6044 command_runner.go:130] ! I0328 01:18:53.861738       1 main.go:227] handling current node
	I0328 01:33:31.798195    6044 command_runner.go:130] ! I0328 01:18:53.861763       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:31.798195    6044 command_runner.go:130] ! I0328 01:18:53.861779       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:31.798195    6044 command_runner.go:130] ! I0328 01:18:53.864849       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:31.798195    6044 command_runner.go:130] ! I0328 01:18:53.864869       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:31.798195    6044 command_runner.go:130] ! I0328 01:19:03.880199       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:31.798195    6044 command_runner.go:130] ! I0328 01:19:03.880733       1 main.go:227] handling current node
	I0328 01:33:31.798273    6044 command_runner.go:130] ! I0328 01:19:03.880872       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:31.798273    6044 command_runner.go:130] ! I0328 01:19:03.880900       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:31.798273    6044 command_runner.go:130] ! I0328 01:19:03.881505       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:31.798273    6044 command_runner.go:130] ! I0328 01:19:03.881543       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:31.798342    6044 command_runner.go:130] ! I0328 01:19:13.889436       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:31.798396    6044 command_runner.go:130] ! I0328 01:19:13.889552       1 main.go:227] handling current node
	I0328 01:33:31.798396    6044 command_runner.go:130] ! I0328 01:19:13.889571       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:31.798396    6044 command_runner.go:130] ! I0328 01:19:13.889581       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:31.798396    6044 command_runner.go:130] ! I0328 01:19:13.889757       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:31.798396    6044 command_runner.go:130] ! I0328 01:19:13.889789       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:31.798396    6044 command_runner.go:130] ! I0328 01:19:23.898023       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:31.798396    6044 command_runner.go:130] ! I0328 01:19:23.898229       1 main.go:227] handling current node
	I0328 01:33:31.798396    6044 command_runner.go:130] ! I0328 01:19:23.898245       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:31.798396    6044 command_runner.go:130] ! I0328 01:19:23.898277       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:31.798396    6044 command_runner.go:130] ! I0328 01:19:23.898405       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:31.798396    6044 command_runner.go:130] ! I0328 01:19:23.898492       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:31.798396    6044 command_runner.go:130] ! I0328 01:19:33.905977       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:31.798396    6044 command_runner.go:130] ! I0328 01:19:33.906123       1 main.go:227] handling current node
	I0328 01:33:31.798396    6044 command_runner.go:130] ! I0328 01:19:33.906157       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:31.798396    6044 command_runner.go:130] ! I0328 01:19:33.906167       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:31.798396    6044 command_runner.go:130] ! I0328 01:19:33.906618       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:31.798396    6044 command_runner.go:130] ! I0328 01:19:33.906762       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:31.798396    6044 command_runner.go:130] ! I0328 01:19:43.914797       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:31.798396    6044 command_runner.go:130] ! I0328 01:19:43.914849       1 main.go:227] handling current node
	I0328 01:33:31.798396    6044 command_runner.go:130] ! I0328 01:19:43.914863       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:31.798396    6044 command_runner.go:130] ! I0328 01:19:43.914872       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:31.798396    6044 command_runner.go:130] ! I0328 01:19:43.915508       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:31.798396    6044 command_runner.go:130] ! I0328 01:19:43.915608       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:31.798396    6044 command_runner.go:130] ! I0328 01:19:53.928273       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:31.799641    6044 command_runner.go:130] ! I0328 01:19:53.928372       1 main.go:227] handling current node
	I0328 01:33:31.799641    6044 command_runner.go:130] ! I0328 01:19:53.928389       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:31.799641    6044 command_runner.go:130] ! I0328 01:19:53.928398       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:31.799641    6044 command_runner.go:130] ! I0328 01:19:53.928659       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:31.800019    6044 command_runner.go:130] ! I0328 01:19:53.928813       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:31.800019    6044 command_runner.go:130] ! I0328 01:20:03.943868       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:31.800019    6044 command_runner.go:130] ! I0328 01:20:03.943974       1 main.go:227] handling current node
	I0328 01:33:31.800019    6044 command_runner.go:130] ! I0328 01:20:03.943995       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:31.800019    6044 command_runner.go:130] ! I0328 01:20:03.944004       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:31.800019    6044 command_runner.go:130] ! I0328 01:20:03.944882       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:31.800019    6044 command_runner.go:130] ! I0328 01:20:03.944986       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:31.800019    6044 command_runner.go:130] ! I0328 01:20:13.959538       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:31.800019    6044 command_runner.go:130] ! I0328 01:20:13.959588       1 main.go:227] handling current node
	I0328 01:33:31.800019    6044 command_runner.go:130] ! I0328 01:20:13.959601       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:31.800181    6044 command_runner.go:130] ! I0328 01:20:13.959609       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:31.800181    6044 command_runner.go:130] ! I0328 01:20:13.960072       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:31.800181    6044 command_runner.go:130] ! I0328 01:20:13.960245       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:31.800181    6044 command_runner.go:130] ! I0328 01:20:23.967471       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:31.800269    6044 command_runner.go:130] ! I0328 01:20:23.967523       1 main.go:227] handling current node
	I0328 01:33:31.800269    6044 command_runner.go:130] ! I0328 01:20:23.967537       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:31.800299    6044 command_runner.go:130] ! I0328 01:20:23.967547       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:31.800299    6044 command_runner.go:130] ! I0328 01:20:23.968155       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:31.800299    6044 command_runner.go:130] ! I0328 01:20:23.968173       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:31.800299    6044 command_runner.go:130] ! I0328 01:20:33.977018       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:31.800299    6044 command_runner.go:130] ! I0328 01:20:33.977224       1 main.go:227] handling current node
	I0328 01:33:31.800299    6044 command_runner.go:130] ! I0328 01:20:33.977259       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:31.800299    6044 command_runner.go:130] ! I0328 01:20:33.977287       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:31.800299    6044 command_runner.go:130] ! I0328 01:20:33.978024       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:31.800299    6044 command_runner.go:130] ! I0328 01:20:33.978173       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:31.800299    6044 command_runner.go:130] ! I0328 01:20:43.987057       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:31.800299    6044 command_runner.go:130] ! I0328 01:20:43.987266       1 main.go:227] handling current node
	I0328 01:33:31.800299    6044 command_runner.go:130] ! I0328 01:20:43.987283       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:31.800299    6044 command_runner.go:130] ! I0328 01:20:43.987293       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:31.800455    6044 command_runner.go:130] ! I0328 01:20:43.987429       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:31.800455    6044 command_runner.go:130] ! I0328 01:20:43.987462       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:31.800455    6044 command_runner.go:130] ! I0328 01:20:53.994024       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:31.800455    6044 command_runner.go:130] ! I0328 01:20:53.994070       1 main.go:227] handling current node
	I0328 01:33:31.800455    6044 command_runner.go:130] ! I0328 01:20:53.994120       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:31.800455    6044 command_runner.go:130] ! I0328 01:20:53.994132       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:31.800548    6044 command_runner.go:130] ! I0328 01:20:53.994628       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:31.800548    6044 command_runner.go:130] ! I0328 01:20:53.994669       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:31.800548    6044 command_runner.go:130] ! I0328 01:21:04.009908       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:31.800548    6044 command_runner.go:130] ! I0328 01:21:04.010006       1 main.go:227] handling current node
	I0328 01:33:31.800548    6044 command_runner.go:130] ! I0328 01:21:04.010023       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:31.800548    6044 command_runner.go:130] ! I0328 01:21:04.010033       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:31.800650    6044 command_runner.go:130] ! I0328 01:21:04.010413       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:31.800650    6044 command_runner.go:130] ! I0328 01:21:04.010445       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:31.800650    6044 command_runner.go:130] ! I0328 01:21:14.024266       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:31.800650    6044 command_runner.go:130] ! I0328 01:21:14.024350       1 main.go:227] handling current node
	I0328 01:33:31.800650    6044 command_runner.go:130] ! I0328 01:21:14.024365       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:31.800650    6044 command_runner.go:130] ! I0328 01:21:14.024372       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:31.800650    6044 command_runner.go:130] ! I0328 01:21:14.024495       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:31.800650    6044 command_runner.go:130] ! I0328 01:21:14.024525       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:31.800650    6044 command_runner.go:130] ! I0328 01:21:24.033056       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:31.800650    6044 command_runner.go:130] ! I0328 01:21:24.033221       1 main.go:227] handling current node
	I0328 01:33:31.800650    6044 command_runner.go:130] ! I0328 01:21:24.033244       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:31.800650    6044 command_runner.go:130] ! I0328 01:21:24.033254       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:31.800650    6044 command_runner.go:130] ! I0328 01:21:24.033447       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:31.800650    6044 command_runner.go:130] ! I0328 01:21:24.033718       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:31.800650    6044 command_runner.go:130] ! I0328 01:21:34.054141       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:31.800650    6044 command_runner.go:130] ! I0328 01:21:34.054348       1 main.go:227] handling current node
	I0328 01:33:31.800650    6044 command_runner.go:130] ! I0328 01:21:34.054367       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:31.800650    6044 command_runner.go:130] ! I0328 01:21:34.054377       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:31.800650    6044 command_runner.go:130] ! I0328 01:21:34.056796       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:31.800650    6044 command_runner.go:130] ! I0328 01:21:34.056838       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:31.800650    6044 command_runner.go:130] ! I0328 01:21:44.063011       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:31.800650    6044 command_runner.go:130] ! I0328 01:21:44.063388       1 main.go:227] handling current node
	I0328 01:33:31.800650    6044 command_runner.go:130] ! I0328 01:21:44.063639       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:31.800650    6044 command_runner.go:130] ! I0328 01:21:44.063794       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:31.800650    6044 command_runner.go:130] ! I0328 01:21:44.064166       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:31.800650    6044 command_runner.go:130] ! I0328 01:21:44.064351       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:31.800650    6044 command_runner.go:130] ! I0328 01:21:54.080807       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:31.800650    6044 command_runner.go:130] ! I0328 01:21:54.080904       1 main.go:227] handling current node
	I0328 01:33:31.800650    6044 command_runner.go:130] ! I0328 01:21:54.080921       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:31.800650    6044 command_runner.go:130] ! I0328 01:21:54.080930       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:31.800650    6044 command_runner.go:130] ! I0328 01:21:54.081415       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:31.800650    6044 command_runner.go:130] ! I0328 01:21:54.081491       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:31.800650    6044 command_runner.go:130] ! I0328 01:22:04.094961       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:31.800650    6044 command_runner.go:130] ! I0328 01:22:04.095397       1 main.go:227] handling current node
	I0328 01:33:31.800650    6044 command_runner.go:130] ! I0328 01:22:04.095905       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:31.800650    6044 command_runner.go:130] ! I0328 01:22:04.096341       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:31.800650    6044 command_runner.go:130] ! I0328 01:22:04.096776       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:31.800650    6044 command_runner.go:130] ! I0328 01:22:04.096877       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:31.800650    6044 command_runner.go:130] ! I0328 01:22:14.117899       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:31.800650    6044 command_runner.go:130] ! I0328 01:22:14.118038       1 main.go:227] handling current node
	I0328 01:33:31.800650    6044 command_runner.go:130] ! I0328 01:22:14.118158       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:31.800650    6044 command_runner.go:130] ! I0328 01:22:14.118310       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:31.800650    6044 command_runner.go:130] ! I0328 01:22:14.118821       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:31.800650    6044 command_runner.go:130] ! I0328 01:22:14.119057       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:31.800650    6044 command_runner.go:130] ! I0328 01:22:24.139816       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:31.800650    6044 command_runner.go:130] ! I0328 01:22:24.140951       1 main.go:227] handling current node
	I0328 01:33:31.800650    6044 command_runner.go:130] ! I0328 01:22:24.140979       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:31.800650    6044 command_runner.go:130] ! I0328 01:22:24.140991       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:31.800650    6044 command_runner.go:130] ! I0328 01:22:24.141167       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:31.800650    6044 command_runner.go:130] ! I0328 01:22:24.141178       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:31.800650    6044 command_runner.go:130] ! I0328 01:22:34.156977       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:31.800650    6044 command_runner.go:130] ! I0328 01:22:34.157189       1 main.go:227] handling current node
	I0328 01:33:31.800650    6044 command_runner.go:130] ! I0328 01:22:34.157704       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:31.800650    6044 command_runner.go:130] ! I0328 01:22:34.157819       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:31.800650    6044 command_runner.go:130] ! I0328 01:22:34.158025       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:31.800650    6044 command_runner.go:130] ! I0328 01:22:34.158059       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:31.800650    6044 command_runner.go:130] ! I0328 01:22:44.166881       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:31.800650    6044 command_runner.go:130] ! I0328 01:22:44.167061       1 main.go:227] handling current node
	I0328 01:33:31.800650    6044 command_runner.go:130] ! I0328 01:22:44.167232       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:31.800650    6044 command_runner.go:130] ! I0328 01:22:44.167380       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:31.800650    6044 command_runner.go:130] ! I0328 01:22:44.167748       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:31.800650    6044 command_runner.go:130] ! I0328 01:22:44.167956       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:31.800650    6044 command_runner.go:130] ! I0328 01:22:54.177031       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:31.800650    6044 command_runner.go:130] ! I0328 01:22:54.177191       1 main.go:227] handling current node
	I0328 01:33:31.800650    6044 command_runner.go:130] ! I0328 01:22:54.177209       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:31.800650    6044 command_runner.go:130] ! I0328 01:22:54.177218       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:31.800650    6044 command_runner.go:130] ! I0328 01:22:54.177774       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:31.800650    6044 command_runner.go:130] ! I0328 01:22:54.177906       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:31.800650    6044 command_runner.go:130] ! I0328 01:23:04.192931       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:31.800650    6044 command_runner.go:130] ! I0328 01:23:04.193190       1 main.go:227] handling current node
	I0328 01:33:31.800650    6044 command_runner.go:130] ! I0328 01:23:04.193208       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:31.800650    6044 command_runner.go:130] ! I0328 01:23:04.193218       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:31.800650    6044 command_runner.go:130] ! I0328 01:23:04.193613       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:31.800650    6044 command_runner.go:130] ! I0328 01:23:04.193699       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:31.800650    6044 command_runner.go:130] ! I0328 01:23:14.203281       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:31.800650    6044 command_runner.go:130] ! I0328 01:23:14.203390       1 main.go:227] handling current node
	I0328 01:33:31.800650    6044 command_runner.go:130] ! I0328 01:23:14.203406       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:31.800650    6044 command_runner.go:130] ! I0328 01:23:14.203415       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:31.800650    6044 command_runner.go:130] ! I0328 01:23:14.204005       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:31.800650    6044 command_runner.go:130] ! I0328 01:23:14.204201       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:31.800650    6044 command_runner.go:130] ! I0328 01:23:24.220758       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:31.800650    6044 command_runner.go:130] ! I0328 01:23:24.220806       1 main.go:227] handling current node
	I0328 01:33:31.800650    6044 command_runner.go:130] ! I0328 01:23:24.220822       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:31.800650    6044 command_runner.go:130] ! I0328 01:23:24.220829       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:31.801665    6044 command_runner.go:130] ! I0328 01:23:24.221546       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:31.801665    6044 command_runner.go:130] ! I0328 01:23:24.221683       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:31.801665    6044 command_runner.go:130] ! I0328 01:23:34.228494       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:31.801665    6044 command_runner.go:130] ! I0328 01:23:34.228589       1 main.go:227] handling current node
	I0328 01:33:31.801665    6044 command_runner.go:130] ! I0328 01:23:34.228604       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:31.801665    6044 command_runner.go:130] ! I0328 01:23:34.228613       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:31.801665    6044 command_runner.go:130] ! I0328 01:23:34.229312       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:31.801665    6044 command_runner.go:130] ! I0328 01:23:34.229577       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:31.801665    6044 command_runner.go:130] ! I0328 01:23:44.244452       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:31.801665    6044 command_runner.go:130] ! I0328 01:23:44.244582       1 main.go:227] handling current node
	I0328 01:33:31.801665    6044 command_runner.go:130] ! I0328 01:23:44.244601       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:31.801665    6044 command_runner.go:130] ! I0328 01:23:44.244611       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:31.801665    6044 command_runner.go:130] ! I0328 01:23:44.245136       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:31.801665    6044 command_runner.go:130] ! I0328 01:23:44.245156       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:31.801665    6044 command_runner.go:130] ! I0328 01:23:54.250789       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:31.801665    6044 command_runner.go:130] ! I0328 01:23:54.250891       1 main.go:227] handling current node
	I0328 01:33:31.801665    6044 command_runner.go:130] ! I0328 01:23:54.250907       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:31.801665    6044 command_runner.go:130] ! I0328 01:23:54.250915       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:31.801665    6044 command_runner.go:130] ! I0328 01:23:54.251035       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:31.801665    6044 command_runner.go:130] ! I0328 01:23:54.251227       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:31.801665    6044 command_runner.go:130] ! I0328 01:24:04.266517       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:31.801665    6044 command_runner.go:130] ! I0328 01:24:04.266634       1 main.go:227] handling current node
	I0328 01:33:31.801665    6044 command_runner.go:130] ! I0328 01:24:04.266650       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:31.801665    6044 command_runner.go:130] ! I0328 01:24:04.266659       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:31.801665    6044 command_runner.go:130] ! I0328 01:24:04.266860       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:31.801665    6044 command_runner.go:130] ! I0328 01:24:04.266944       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:31.801665    6044 command_runner.go:130] ! I0328 01:24:14.281321       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:31.801665    6044 command_runner.go:130] ! I0328 01:24:14.281432       1 main.go:227] handling current node
	I0328 01:33:31.801665    6044 command_runner.go:130] ! I0328 01:24:14.281448       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:31.801665    6044 command_runner.go:130] ! I0328 01:24:14.281474       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:31.801665    6044 command_runner.go:130] ! I0328 01:24:14.281660       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:31.801665    6044 command_runner.go:130] ! I0328 01:24:14.281692       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:31.801665    6044 command_runner.go:130] ! I0328 01:24:24.289822       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:31.801665    6044 command_runner.go:130] ! I0328 01:24:24.290280       1 main.go:227] handling current node
	I0328 01:33:31.801665    6044 command_runner.go:130] ! I0328 01:24:24.290352       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:31.801665    6044 command_runner.go:130] ! I0328 01:24:24.290467       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:31.801665    6044 command_runner.go:130] ! I0328 01:24:24.290854       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:31.801665    6044 command_runner.go:130] ! I0328 01:24:24.290943       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:31.801665    6044 command_runner.go:130] ! I0328 01:24:34.303810       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:31.801665    6044 command_runner.go:130] ! I0328 01:24:34.303934       1 main.go:227] handling current node
	I0328 01:33:31.801665    6044 command_runner.go:130] ! I0328 01:24:34.303965       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:31.801665    6044 command_runner.go:130] ! I0328 01:24:34.303998       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:31.801665    6044 command_runner.go:130] ! I0328 01:24:34.304417       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:31.801665    6044 command_runner.go:130] ! I0328 01:24:34.304435       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:31.801665    6044 command_runner.go:130] ! I0328 01:24:44.325930       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:31.801665    6044 command_runner.go:130] ! I0328 01:24:44.326037       1 main.go:227] handling current node
	I0328 01:33:31.801665    6044 command_runner.go:130] ! I0328 01:24:44.326055       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:31.801665    6044 command_runner.go:130] ! I0328 01:24:44.326064       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:31.801665    6044 command_runner.go:130] ! I0328 01:24:44.327133       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:31.801665    6044 command_runner.go:130] ! I0328 01:24:44.327169       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:31.801665    6044 command_runner.go:130] ! I0328 01:24:54.342811       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:31.801665    6044 command_runner.go:130] ! I0328 01:24:54.342842       1 main.go:227] handling current node
	I0328 01:33:31.801665    6044 command_runner.go:130] ! I0328 01:24:54.342871       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:31.801665    6044 command_runner.go:130] ! I0328 01:24:54.342878       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:31.801665    6044 command_runner.go:130] ! I0328 01:24:54.343008       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:31.801665    6044 command_runner.go:130] ! I0328 01:24:54.343016       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:31.801665    6044 command_runner.go:130] ! I0328 01:25:04.359597       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:31.801665    6044 command_runner.go:130] ! I0328 01:25:04.359702       1 main.go:227] handling current node
	I0328 01:33:31.801665    6044 command_runner.go:130] ! I0328 01:25:04.359718       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:31.801665    6044 command_runner.go:130] ! I0328 01:25:04.359727       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:31.801665    6044 command_runner.go:130] ! I0328 01:25:04.360480       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:31.801665    6044 command_runner.go:130] ! I0328 01:25:04.360570       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:31.801665    6044 command_runner.go:130] ! I0328 01:25:14.367988       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:31.801665    6044 command_runner.go:130] ! I0328 01:25:14.368593       1 main.go:227] handling current node
	I0328 01:33:31.801665    6044 command_runner.go:130] ! I0328 01:25:14.368613       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:31.801665    6044 command_runner.go:130] ! I0328 01:25:14.368623       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:31.801665    6044 command_runner.go:130] ! I0328 01:25:14.368889       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:31.801665    6044 command_runner.go:130] ! I0328 01:25:14.368925       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:31.802755    6044 command_runner.go:130] ! I0328 01:25:24.402024       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:31.802755    6044 command_runner.go:130] ! I0328 01:25:24.402202       1 main.go:227] handling current node
	I0328 01:33:31.802755    6044 command_runner.go:130] ! I0328 01:25:24.402220       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:31.802755    6044 command_runner.go:130] ! I0328 01:25:24.402229       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:31.802755    6044 command_runner.go:130] ! I0328 01:25:24.402486       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:31.802755    6044 command_runner.go:130] ! I0328 01:25:24.402522       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:31.802755    6044 command_runner.go:130] ! I0328 01:25:34.417358       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:31.802755    6044 command_runner.go:130] ! I0328 01:25:34.417459       1 main.go:227] handling current node
	I0328 01:33:31.802755    6044 command_runner.go:130] ! I0328 01:25:34.417475       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:31.802755    6044 command_runner.go:130] ! I0328 01:25:34.417485       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:31.802755    6044 command_runner.go:130] ! I0328 01:25:34.417877       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:31.802755    6044 command_runner.go:130] ! I0328 01:25:34.418025       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:31.802755    6044 command_runner.go:130] ! I0328 01:25:44.434985       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:31.802755    6044 command_runner.go:130] ! I0328 01:25:44.435206       1 main.go:227] handling current node
	I0328 01:33:31.802755    6044 command_runner.go:130] ! I0328 01:25:44.435441       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:31.802755    6044 command_runner.go:130] ! I0328 01:25:44.435475       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:31.802755    6044 command_runner.go:130] ! I0328 01:25:44.435904       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:31.802755    6044 command_runner.go:130] ! I0328 01:25:44.436000       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:31.802755    6044 command_runner.go:130] ! I0328 01:25:54.449873       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:31.802755    6044 command_runner.go:130] ! I0328 01:25:54.449975       1 main.go:227] handling current node
	I0328 01:33:31.802755    6044 command_runner.go:130] ! I0328 01:25:54.449990       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:31.802755    6044 command_runner.go:130] ! I0328 01:25:54.449999       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:31.802755    6044 command_runner.go:130] ! I0328 01:25:54.450243       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:31.802755    6044 command_runner.go:130] ! I0328 01:25:54.450388       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:31.802755    6044 command_runner.go:130] ! I0328 01:26:04.463682       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:31.802755    6044 command_runner.go:130] ! I0328 01:26:04.463799       1 main.go:227] handling current node
	I0328 01:33:31.802755    6044 command_runner.go:130] ! I0328 01:26:04.463816       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:31.802755    6044 command_runner.go:130] ! I0328 01:26:04.463828       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:31.802755    6044 command_runner.go:130] ! I0328 01:26:04.463959       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:31.802755    6044 command_runner.go:130] ! I0328 01:26:04.463990       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:31.802755    6044 command_runner.go:130] ! I0328 01:26:14.470825       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:31.802755    6044 command_runner.go:130] ! I0328 01:26:14.471577       1 main.go:227] handling current node
	I0328 01:33:31.802755    6044 command_runner.go:130] ! I0328 01:26:14.471678       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:31.802755    6044 command_runner.go:130] ! I0328 01:26:14.471692       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:31.802755    6044 command_runner.go:130] ! I0328 01:26:14.472010       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:31.802755    6044 command_runner.go:130] ! I0328 01:26:14.472170       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:31.802755    6044 command_runner.go:130] ! I0328 01:26:24.485860       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:31.802755    6044 command_runner.go:130] ! I0328 01:26:24.485913       1 main.go:227] handling current node
	I0328 01:33:31.802755    6044 command_runner.go:130] ! I0328 01:26:24.485944       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:31.802755    6044 command_runner.go:130] ! I0328 01:26:24.485951       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:31.802755    6044 command_runner.go:130] ! I0328 01:26:24.486383       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:31.802755    6044 command_runner.go:130] ! I0328 01:26:24.486499       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:31.802755    6044 command_runner.go:130] ! I0328 01:26:34.502352       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:31.802755    6044 command_runner.go:130] ! I0328 01:26:34.502457       1 main.go:227] handling current node
	I0328 01:33:31.802755    6044 command_runner.go:130] ! I0328 01:26:34.502475       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:31.802755    6044 command_runner.go:130] ! I0328 01:26:34.502484       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:31.802755    6044 command_runner.go:130] ! I0328 01:26:34.502671       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:31.802755    6044 command_runner.go:130] ! I0328 01:26:34.502731       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:31.802755    6044 command_runner.go:130] ! I0328 01:26:44.515791       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:31.802755    6044 command_runner.go:130] ! I0328 01:26:44.516785       1 main.go:227] handling current node
	I0328 01:33:31.802755    6044 command_runner.go:130] ! I0328 01:26:44.517605       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:31.802755    6044 command_runner.go:130] ! I0328 01:26:44.518163       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:31.802755    6044 command_runner.go:130] ! I0328 01:26:44.518724       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:31.802755    6044 command_runner.go:130] ! I0328 01:26:44.519042       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:31.802755    6044 command_runner.go:130] ! I0328 01:26:54.536706       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:31.802755    6044 command_runner.go:130] ! I0328 01:26:54.536762       1 main.go:227] handling current node
	I0328 01:33:31.802755    6044 command_runner.go:130] ! I0328 01:26:54.536796       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:31.802755    6044 command_runner.go:130] ! I0328 01:26:54.537236       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:31.802755    6044 command_runner.go:130] ! I0328 01:26:54.537725       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:31.802755    6044 command_runner.go:130] ! I0328 01:26:54.537823       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:31.802755    6044 command_runner.go:130] ! I0328 01:27:04.553753       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:31.802755    6044 command_runner.go:130] ! I0328 01:27:04.553802       1 main.go:227] handling current node
	I0328 01:33:31.802755    6044 command_runner.go:130] ! I0328 01:27:04.553813       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:31.802755    6044 command_runner.go:130] ! I0328 01:27:04.553820       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:31.802755    6044 command_runner.go:130] ! I0328 01:27:04.554279       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:31.802755    6044 command_runner.go:130] ! I0328 01:27:04.554301       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:31.802755    6044 command_runner.go:130] ! I0328 01:27:14.572473       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:31.802755    6044 command_runner.go:130] ! I0328 01:27:14.572567       1 main.go:227] handling current node
	I0328 01:33:31.803643    6044 command_runner.go:130] ! I0328 01:27:14.572583       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:31.803643    6044 command_runner.go:130] ! I0328 01:27:14.572591       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:31.803643    6044 command_runner.go:130] ! I0328 01:27:14.572710       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:31.803643    6044 command_runner.go:130] ! I0328 01:27:14.572740       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:31.803643    6044 command_runner.go:130] ! I0328 01:27:24.579996       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:31.803643    6044 command_runner.go:130] ! I0328 01:27:24.580041       1 main.go:227] handling current node
	I0328 01:33:31.803643    6044 command_runner.go:130] ! I0328 01:27:24.580053       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:31.803643    6044 command_runner.go:130] ! I0328 01:27:24.580357       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:31.803643    6044 command_runner.go:130] ! I0328 01:27:34.590722       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:31.803643    6044 command_runner.go:130] ! I0328 01:27:34.590837       1 main.go:227] handling current node
	I0328 01:33:31.803643    6044 command_runner.go:130] ! I0328 01:27:34.590855       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:31.803643    6044 command_runner.go:130] ! I0328 01:27:34.590864       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:31.803643    6044 command_runner.go:130] ! I0328 01:27:34.591158       1 main.go:223] Handling node with IPs: map[172.28.224.172:{}]
	I0328 01:33:31.803643    6044 command_runner.go:130] ! I0328 01:27:34.591426       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.3.0/24] 
	I0328 01:33:31.803643    6044 command_runner.go:130] ! I0328 01:27:34.591599       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.3.0/24 Src: <nil> Gw: 172.28.224.172 Flags: [] Table: 0} 
	I0328 01:33:31.803643    6044 command_runner.go:130] ! I0328 01:27:44.598527       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:31.803643    6044 command_runner.go:130] ! I0328 01:27:44.598576       1 main.go:227] handling current node
	I0328 01:33:31.803643    6044 command_runner.go:130] ! I0328 01:27:44.598590       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:31.803643    6044 command_runner.go:130] ! I0328 01:27:44.598597       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:31.803643    6044 command_runner.go:130] ! I0328 01:27:44.599051       1 main.go:223] Handling node with IPs: map[172.28.224.172:{}]
	I0328 01:33:31.803643    6044 command_runner.go:130] ! I0328 01:27:44.599199       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.3.0/24] 
	I0328 01:33:31.803643    6044 command_runner.go:130] ! I0328 01:27:54.612380       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:31.803643    6044 command_runner.go:130] ! I0328 01:27:54.612492       1 main.go:227] handling current node
	I0328 01:33:31.803643    6044 command_runner.go:130] ! I0328 01:27:54.612511       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:31.803643    6044 command_runner.go:130] ! I0328 01:27:54.612521       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:31.803643    6044 command_runner.go:130] ! I0328 01:27:54.612644       1 main.go:223] Handling node with IPs: map[172.28.224.172:{}]
	I0328 01:33:31.803643    6044 command_runner.go:130] ! I0328 01:27:54.612675       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.3.0/24] 
	I0328 01:33:31.803643    6044 command_runner.go:130] ! I0328 01:28:04.619944       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:31.803643    6044 command_runner.go:130] ! I0328 01:28:04.619975       1 main.go:227] handling current node
	I0328 01:33:31.803643    6044 command_runner.go:130] ! I0328 01:28:04.619987       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:31.803643    6044 command_runner.go:130] ! I0328 01:28:04.619994       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:31.803643    6044 command_runner.go:130] ! I0328 01:28:04.620739       1 main.go:223] Handling node with IPs: map[172.28.224.172:{}]
	I0328 01:33:31.803643    6044 command_runner.go:130] ! I0328 01:28:04.620826       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.3.0/24] 
	I0328 01:33:31.803643    6044 command_runner.go:130] ! I0328 01:28:14.637978       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:31.803643    6044 command_runner.go:130] ! I0328 01:28:14.638455       1 main.go:227] handling current node
	I0328 01:33:31.803643    6044 command_runner.go:130] ! I0328 01:28:14.639024       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:31.803643    6044 command_runner.go:130] ! I0328 01:28:14.639507       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:31.803643    6044 command_runner.go:130] ! I0328 01:28:14.640025       1 main.go:223] Handling node with IPs: map[172.28.224.172:{}]
	I0328 01:33:31.803643    6044 command_runner.go:130] ! I0328 01:28:14.640512       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.3.0/24] 
	I0328 01:33:31.803643    6044 command_runner.go:130] ! I0328 01:28:24.648901       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:31.803643    6044 command_runner.go:130] ! I0328 01:28:24.649550       1 main.go:227] handling current node
	I0328 01:33:31.803643    6044 command_runner.go:130] ! I0328 01:28:24.649741       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:31.803643    6044 command_runner.go:130] ! I0328 01:28:24.650198       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:31.803643    6044 command_runner.go:130] ! I0328 01:28:24.650806       1 main.go:223] Handling node with IPs: map[172.28.224.172:{}]
	I0328 01:33:31.803643    6044 command_runner.go:130] ! I0328 01:28:24.651143       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.3.0/24] 
	I0328 01:33:31.803643    6044 command_runner.go:130] ! I0328 01:28:34.657839       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:31.803643    6044 command_runner.go:130] ! I0328 01:28:34.658038       1 main.go:227] handling current node
	I0328 01:33:31.803643    6044 command_runner.go:130] ! I0328 01:28:34.658054       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:31.803643    6044 command_runner.go:130] ! I0328 01:28:34.658080       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:31.803643    6044 command_runner.go:130] ! I0328 01:28:34.658271       1 main.go:223] Handling node with IPs: map[172.28.224.172:{}]
	I0328 01:33:31.803643    6044 command_runner.go:130] ! I0328 01:28:34.658831       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.3.0/24] 
	I0328 01:33:31.803643    6044 command_runner.go:130] ! I0328 01:28:44.666644       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:31.803643    6044 command_runner.go:130] ! I0328 01:28:44.666752       1 main.go:227] handling current node
	I0328 01:33:31.803643    6044 command_runner.go:130] ! I0328 01:28:44.666769       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:31.803643    6044 command_runner.go:130] ! I0328 01:28:44.666778       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:31.803643    6044 command_runner.go:130] ! I0328 01:28:44.667298       1 main.go:223] Handling node with IPs: map[172.28.224.172:{}]
	I0328 01:33:31.803643    6044 command_runner.go:130] ! I0328 01:28:44.667513       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.3.0/24] 
	I0328 01:33:31.803643    6044 command_runner.go:130] ! I0328 01:28:54.679890       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:31.803643    6044 command_runner.go:130] ! I0328 01:28:54.679999       1 main.go:227] handling current node
	I0328 01:33:31.803643    6044 command_runner.go:130] ! I0328 01:28:54.680015       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:31.803643    6044 command_runner.go:130] ! I0328 01:28:54.680023       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:31.803643    6044 command_runner.go:130] ! I0328 01:28:54.680512       1 main.go:223] Handling node with IPs: map[172.28.224.172:{}]
	I0328 01:33:31.803643    6044 command_runner.go:130] ! I0328 01:28:54.680547       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.3.0/24] 
	I0328 01:33:31.803643    6044 command_runner.go:130] ! I0328 01:29:04.687598       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:31.803643    6044 command_runner.go:130] ! I0328 01:29:04.687765       1 main.go:227] handling current node
	I0328 01:33:31.803643    6044 command_runner.go:130] ! I0328 01:29:04.687785       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:31.803643    6044 command_runner.go:130] ! I0328 01:29:04.687796       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:31.803643    6044 command_runner.go:130] ! I0328 01:29:04.687963       1 main.go:223] Handling node with IPs: map[172.28.224.172:{}]
	I0328 01:33:31.803643    6044 command_runner.go:130] ! I0328 01:29:04.687979       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.3.0/24] 
	I0328 01:33:31.803643    6044 command_runner.go:130] ! I0328 01:29:14.698762       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:31.803643    6044 command_runner.go:130] ! I0328 01:29:14.698810       1 main.go:227] handling current node
	I0328 01:33:31.803643    6044 command_runner.go:130] ! I0328 01:29:14.698825       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:31.803643    6044 command_runner.go:130] ! I0328 01:29:14.698832       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:31.803643    6044 command_runner.go:130] ! I0328 01:29:14.699169       1 main.go:223] Handling node with IPs: map[172.28.224.172:{}]
	I0328 01:33:31.803643    6044 command_runner.go:130] ! I0328 01:29:14.699203       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.3.0/24] 
	I0328 01:33:31.803643    6044 command_runner.go:130] ! I0328 01:29:24.717977       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:31.803643    6044 command_runner.go:130] ! I0328 01:29:24.718118       1 main.go:227] handling current node
	I0328 01:33:31.804718    6044 command_runner.go:130] ! I0328 01:29:24.718136       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:31.804718    6044 command_runner.go:130] ! I0328 01:29:24.718145       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:31.804718    6044 command_runner.go:130] ! I0328 01:29:24.718279       1 main.go:223] Handling node with IPs: map[172.28.224.172:{}]
	I0328 01:33:31.804718    6044 command_runner.go:130] ! I0328 01:29:24.718311       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.3.0/24] 
	I0328 01:33:31.804718    6044 command_runner.go:130] ! I0328 01:29:34.724517       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:31.804718    6044 command_runner.go:130] ! I0328 01:29:34.724618       1 main.go:227] handling current node
	I0328 01:33:31.804718    6044 command_runner.go:130] ! I0328 01:29:34.724634       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:31.804718    6044 command_runner.go:130] ! I0328 01:29:34.724643       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:31.804718    6044 command_runner.go:130] ! I0328 01:29:34.725226       1 main.go:223] Handling node with IPs: map[172.28.224.172:{}]
	I0328 01:33:31.804718    6044 command_runner.go:130] ! I0328 01:29:34.725416       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.3.0/24] 
	I0328 01:33:31.821684    6044 logs.go:123] Gathering logs for container status ...
	I0328 01:33:31.821684    6044 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:33:31.952615    6044 command_runner.go:130] > CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	I0328 01:33:31.952670    6044 command_runner.go:130] > dea6e77fe6072       8c811b4aec35f                                                                                         7 seconds ago        Running             busybox                   1                   57a41fbc578d5       busybox-7fdf7869d9-ct428
	I0328 01:33:31.952670    6044 command_runner.go:130] > e6a5a75ec447f       cbb01a7bd410d                                                                                         7 seconds ago        Running             coredns                   1                   d3a9caca46521       coredns-76f75df574-776ph
	I0328 01:33:31.952670    6044 command_runner.go:130] > 64647587ffc1f       6e38f40d628db                                                                                         27 seconds ago       Running             storage-provisioner       2                   821d3cf9ae1a9       storage-provisioner
	I0328 01:33:31.952670    6044 command_runner.go:130] > ee99098e42fc1       4950bb10b3f87                                                                                         About a minute ago   Running             kindnet-cni               1                   347f7ad7ebaed       kindnet-rwghf
	I0328 01:33:31.952670    6044 command_runner.go:130] > 4dcf03394ea80       6e38f40d628db                                                                                         About a minute ago   Exited              storage-provisioner       1                   821d3cf9ae1a9       storage-provisioner
	I0328 01:33:31.952670    6044 command_runner.go:130] > 7c9638784c60f       a1d263b5dc5b0                                                                                         About a minute ago   Running             kube-proxy                1                   dfd01cb54b7d8       kube-proxy-47rqg
	I0328 01:33:31.952670    6044 command_runner.go:130] > 6539c85e1b61f       39f995c9f1996                                                                                         About a minute ago   Running             kube-apiserver            0                   4dd7c46520744       kube-apiserver-multinode-240000
	I0328 01:33:31.952670    6044 command_runner.go:130] > ab4a76ecb029b       3861cfcd7c04c                                                                                         About a minute ago   Running             etcd                      0                   8780a18ab9755       etcd-multinode-240000
	I0328 01:33:31.952670    6044 command_runner.go:130] > bc83a37dbd03c       8c390d98f50c0                                                                                         About a minute ago   Running             kube-scheduler            1                   8cf9dbbfda9ea       kube-scheduler-multinode-240000
	I0328 01:33:31.952886    6044 command_runner.go:130] > ceaccf323deed       6052a25da3f97                                                                                         About a minute ago   Running             kube-controller-manager   1                   3314134e34d83       kube-controller-manager-multinode-240000
	I0328 01:33:31.952914    6044 command_runner.go:130] > a130300bc7839       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   21 minutes ago       Exited              busybox                   0                   930fbfde452c0       busybox-7fdf7869d9-ct428
	I0328 01:33:31.952914    6044 command_runner.go:130] > 29e516c918ef4       cbb01a7bd410d                                                                                         25 minutes ago       Exited              coredns                   0                   6b6f67390b070       coredns-76f75df574-776ph
	I0328 01:33:31.952914    6044 command_runner.go:130] > dc9808261b21c       kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988              25 minutes ago       Exited              kindnet-cni               0                   6ae82cd0a8489       kindnet-rwghf
	I0328 01:33:31.952914    6044 command_runner.go:130] > bb0b3c5422645       a1d263b5dc5b0                                                                                         25 minutes ago       Exited              kube-proxy                0                   5d9ed3a20e885       kube-proxy-47rqg
	I0328 01:33:31.952914    6044 command_runner.go:130] > 1aa05268773e4       6052a25da3f97                                                                                         26 minutes ago       Exited              kube-controller-manager   0                   763932cfdf0b0       kube-controller-manager-multinode-240000
	I0328 01:33:31.952914    6044 command_runner.go:130] > 7061eab02790d       8c390d98f50c0                                                                                         26 minutes ago       Exited              kube-scheduler            0                   7415d077c6f81       kube-scheduler-multinode-240000
	I0328 01:33:31.955426    6044 logs.go:123] Gathering logs for kube-scheduler [7061eab02790] ...
	I0328 01:33:31.955500    6044 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7061eab02790"
	I0328 01:33:31.994056    6044 command_runner.go:130] ! I0328 01:07:24.655923       1 serving.go:380] Generated self-signed cert in-memory
	I0328 01:33:31.994570    6044 command_runner.go:130] ! W0328 01:07:26.955719       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	I0328 01:33:31.994633    6044 command_runner.go:130] ! W0328 01:07:26.956050       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	I0328 01:33:31.994633    6044 command_runner.go:130] ! W0328 01:07:26.956340       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	I0328 01:33:31.994633    6044 command_runner.go:130] ! W0328 01:07:26.956518       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0328 01:33:31.994694    6044 command_runner.go:130] ! I0328 01:07:27.011654       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.29.3"
	I0328 01:33:31.994730    6044 command_runner.go:130] ! I0328 01:07:27.011702       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0328 01:33:31.994730    6044 command_runner.go:130] ! I0328 01:07:27.016073       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0328 01:33:31.994730    6044 command_runner.go:130] ! I0328 01:07:27.016395       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0328 01:33:31.994798    6044 command_runner.go:130] ! I0328 01:07:27.016638       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0328 01:33:31.995727    6044 command_runner.go:130] ! W0328 01:07:27.041308       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0328 01:33:31.995727    6044 command_runner.go:130] ! E0328 01:07:27.041400       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0328 01:33:31.995822    6044 command_runner.go:130] ! W0328 01:07:27.041664       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0328 01:33:31.995822    6044 command_runner.go:130] ! E0328 01:07:27.043394       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0328 01:33:31.995822    6044 command_runner.go:130] ! I0328 01:07:27.016423       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0328 01:33:31.995908    6044 command_runner.go:130] ! W0328 01:07:27.042004       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0328 01:33:31.995986    6044 command_runner.go:130] ! E0328 01:07:27.047333       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0328 01:33:31.995986    6044 command_runner.go:130] ! W0328 01:07:27.042140       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0328 01:33:31.996064    6044 command_runner.go:130] ! E0328 01:07:27.047417       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0328 01:33:31.996149    6044 command_runner.go:130] ! W0328 01:07:27.042578       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0328 01:33:31.996149    6044 command_runner.go:130] ! E0328 01:07:27.047834       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0328 01:33:31.996212    6044 command_runner.go:130] ! W0328 01:07:27.042825       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0328 01:33:31.996212    6044 command_runner.go:130] ! E0328 01:07:27.047881       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0328 01:33:31.996212    6044 command_runner.go:130] ! W0328 01:07:27.054199       1 reflector.go:539] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0328 01:33:31.996212    6044 command_runner.go:130] ! E0328 01:07:27.054246       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0328 01:33:31.996212    6044 command_runner.go:130] ! W0328 01:07:27.054853       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0328 01:33:31.996212    6044 command_runner.go:130] ! E0328 01:07:27.054928       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0328 01:33:31.996212    6044 command_runner.go:130] ! W0328 01:07:27.055680       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0328 01:33:31.996212    6044 command_runner.go:130] ! E0328 01:07:27.056176       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0328 01:33:31.996212    6044 command_runner.go:130] ! W0328 01:07:27.056445       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0328 01:33:31.996212    6044 command_runner.go:130] ! E0328 01:07:27.056649       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0328 01:33:31.996212    6044 command_runner.go:130] ! W0328 01:07:27.056923       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0328 01:33:31.996212    6044 command_runner.go:130] ! E0328 01:07:27.057184       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0328 01:33:31.996212    6044 command_runner.go:130] ! W0328 01:07:27.057363       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0328 01:33:31.996212    6044 command_runner.go:130] ! E0328 01:07:27.057575       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0328 01:33:31.996212    6044 command_runner.go:130] ! W0328 01:07:27.057920       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0328 01:33:31.996212    6044 command_runner.go:130] ! E0328 01:07:27.058160       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0328 01:33:31.996212    6044 command_runner.go:130] ! W0328 01:07:27.058539       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0328 01:33:31.996212    6044 command_runner.go:130] ! E0328 01:07:27.058924       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0328 01:33:31.996212    6044 command_runner.go:130] ! W0328 01:07:27.059533       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0328 01:33:31.996749    6044 command_runner.go:130] ! E0328 01:07:27.060749       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0328 01:33:31.996797    6044 command_runner.go:130] ! W0328 01:07:27.927413       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0328 01:33:31.996797    6044 command_runner.go:130] ! E0328 01:07:27.927826       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0328 01:33:31.996867    6044 command_runner.go:130] ! W0328 01:07:28.013939       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0328 01:33:31.996986    6044 command_runner.go:130] ! E0328 01:07:28.014242       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0328 01:33:31.996986    6044 command_runner.go:130] ! W0328 01:07:28.056311       1 reflector.go:539] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0328 01:33:31.996986    6044 command_runner.go:130] ! E0328 01:07:28.058850       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0328 01:33:31.996986    6044 command_runner.go:130] ! W0328 01:07:28.076506       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0328 01:33:31.996986    6044 command_runner.go:130] ! E0328 01:07:28.076537       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0328 01:33:31.996986    6044 command_runner.go:130] ! W0328 01:07:28.106836       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0328 01:33:31.996986    6044 command_runner.go:130] ! E0328 01:07:28.107081       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0328 01:33:31.996986    6044 command_runner.go:130] ! W0328 01:07:28.240756       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0328 01:33:31.996986    6044 command_runner.go:130] ! E0328 01:07:28.240834       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0328 01:33:31.997531    6044 command_runner.go:130] ! W0328 01:07:28.255074       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0328 01:33:31.997531    6044 command_runner.go:130] ! E0328 01:07:28.255356       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0328 01:33:31.997609    6044 command_runner.go:130] ! W0328 01:07:28.278207       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0328 01:33:31.997609    6044 command_runner.go:130] ! E0328 01:07:28.278668       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0328 01:33:31.997732    6044 command_runner.go:130] ! W0328 01:07:28.381584       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0328 01:33:31.997732    6044 command_runner.go:130] ! E0328 01:07:28.381627       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0328 01:33:31.997798    6044 command_runner.go:130] ! W0328 01:07:28.514618       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0328 01:33:31.997798    6044 command_runner.go:130] ! E0328 01:07:28.515155       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0328 01:33:31.997922    6044 command_runner.go:130] ! W0328 01:07:28.528993       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0328 01:33:31.997922    6044 command_runner.go:130] ! E0328 01:07:28.529395       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0328 01:33:31.997922    6044 command_runner.go:130] ! W0328 01:07:28.532653       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0328 01:33:31.997922    6044 command_runner.go:130] ! E0328 01:07:28.532704       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0328 01:33:31.997922    6044 command_runner.go:130] ! W0328 01:07:28.584380       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0328 01:33:31.997922    6044 command_runner.go:130] ! E0328 01:07:28.585331       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0328 01:33:31.997922    6044 command_runner.go:130] ! W0328 01:07:28.617611       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0328 01:33:31.997922    6044 command_runner.go:130] ! E0328 01:07:28.618424       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0328 01:33:31.997922    6044 command_runner.go:130] ! W0328 01:07:28.646703       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0328 01:33:31.997922    6044 command_runner.go:130] ! E0328 01:07:28.647128       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0328 01:33:31.997922    6044 command_runner.go:130] ! I0328 01:07:30.316754       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0328 01:33:31.997922    6044 command_runner.go:130] ! I0328 01:29:38.212199       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0328 01:33:31.997922    6044 command_runner.go:130] ! I0328 01:29:38.213339       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0328 01:33:31.997922    6044 command_runner.go:130] ! I0328 01:29:38.213731       1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0328 01:33:31.997922    6044 command_runner.go:130] ! E0328 01:29:38.223877       1 run.go:74] "command failed" err="finished without leader elect"
	I0328 01:33:32.008543    6044 logs.go:123] Gathering logs for Docker ...
	I0328 01:33:32.008543    6044 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0328 01:33:32.046602    6044 command_runner.go:130] > Mar 28 01:30:39 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0328 01:33:32.046652    6044 command_runner.go:130] > Mar 28 01:30:39 minikube cri-dockerd[221]: time="2024-03-28T01:30:39Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0328 01:33:32.046652    6044 command_runner.go:130] > Mar 28 01:30:39 minikube cri-dockerd[221]: time="2024-03-28T01:30:39Z" level=info msg="Start docker client with request timeout 0s"
	I0328 01:33:32.046652    6044 command_runner.go:130] > Mar 28 01:30:39 minikube cri-dockerd[221]: time="2024-03-28T01:30:39Z" level=fatal msg="failed to get docker version: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0328 01:33:32.046740    6044 command_runner.go:130] > Mar 28 01:30:39 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0328 01:33:32.046775    6044 command_runner.go:130] > Mar 28 01:30:39 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0328 01:33:32.046775    6044 command_runner.go:130] > Mar 28 01:30:39 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0328 01:33:32.046775    6044 command_runner.go:130] > Mar 28 01:30:42 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 1.
	I0328 01:33:32.046775    6044 command_runner.go:130] > Mar 28 01:30:42 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0328 01:33:32.046775    6044 command_runner.go:130] > Mar 28 01:30:42 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0328 01:33:32.046775    6044 command_runner.go:130] > Mar 28 01:30:42 minikube cri-dockerd[411]: time="2024-03-28T01:30:42Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0328 01:33:32.046775    6044 command_runner.go:130] > Mar 28 01:30:42 minikube cri-dockerd[411]: time="2024-03-28T01:30:42Z" level=info msg="Start docker client with request timeout 0s"
	I0328 01:33:32.046775    6044 command_runner.go:130] > Mar 28 01:30:42 minikube cri-dockerd[411]: time="2024-03-28T01:30:42Z" level=fatal msg="failed to get docker version: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0328 01:33:32.046775    6044 command_runner.go:130] > Mar 28 01:30:42 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0328 01:33:32.046775    6044 command_runner.go:130] > Mar 28 01:30:42 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0328 01:33:32.046775    6044 command_runner.go:130] > Mar 28 01:30:42 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0328 01:33:32.046775    6044 command_runner.go:130] > Mar 28 01:30:44 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 2.
	I0328 01:33:32.046775    6044 command_runner.go:130] > Mar 28 01:30:44 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0328 01:33:32.046775    6044 command_runner.go:130] > Mar 28 01:30:44 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0328 01:33:32.046775    6044 command_runner.go:130] > Mar 28 01:30:44 minikube cri-dockerd[432]: time="2024-03-28T01:30:44Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0328 01:33:32.046775    6044 command_runner.go:130] > Mar 28 01:30:44 minikube cri-dockerd[432]: time="2024-03-28T01:30:44Z" level=info msg="Start docker client with request timeout 0s"
	I0328 01:33:32.046775    6044 command_runner.go:130] > Mar 28 01:30:44 minikube cri-dockerd[432]: time="2024-03-28T01:30:44Z" level=fatal msg="failed to get docker version: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0328 01:33:32.046775    6044 command_runner.go:130] > Mar 28 01:30:44 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0328 01:33:32.046775    6044 command_runner.go:130] > Mar 28 01:30:44 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0328 01:33:32.046775    6044 command_runner.go:130] > Mar 28 01:30:44 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0328 01:33:32.046775    6044 command_runner.go:130] > Mar 28 01:30:46 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 3.
	I0328 01:33:32.046775    6044 command_runner.go:130] > Mar 28 01:30:46 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0328 01:33:32.046775    6044 command_runner.go:130] > Mar 28 01:30:46 minikube systemd[1]: cri-docker.service: Start request repeated too quickly.
	I0328 01:33:32.046775    6044 command_runner.go:130] > Mar 28 01:30:46 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0328 01:33:32.046775    6044 command_runner.go:130] > Mar 28 01:30:46 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0328 01:33:32.046775    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 systemd[1]: Starting Docker Application Container Engine...
	I0328 01:33:32.046775    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[661]: time="2024-03-28T01:31:35.187514586Z" level=info msg="Starting up"
	I0328 01:33:32.046775    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[661]: time="2024-03-28T01:31:35.188793924Z" level=info msg="containerd not running, starting managed containerd"
	I0328 01:33:32.046775    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[661]: time="2024-03-28T01:31:35.190152365Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=667
	I0328 01:33:32.046775    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.231336402Z" level=info msg="starting containerd" revision=dcf2847247e18caba8dce86522029642f60fe96b version=v1.7.14
	I0328 01:33:32.046775    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.261679714Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0328 01:33:32.046775    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.261844319Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0328 01:33:32.046775    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.262043225Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0328 01:33:32.047327    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.262141928Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0328 01:33:32.047327    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.262784947Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0328 01:33:32.047377    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.262879050Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0328 01:33:32.047430    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.263137658Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0328 01:33:32.047430    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.263270562Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0328 01:33:32.047430    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.263294463Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	I0328 01:33:32.047495    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.263307663Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0328 01:33:32.047523    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.263734076Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0328 01:33:32.047552    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.264531200Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0328 01:33:32.047552    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.267908401Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0328 01:33:32.047552    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.268045005Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0328 01:33:32.047552    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.268342414Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0328 01:33:32.047552    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.268438817Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0328 01:33:32.047552    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.269089237Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0328 01:33:32.047552    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.269210440Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	I0328 01:33:32.047552    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.269296343Z" level=info msg="metadata content store policy set" policy=shared
	I0328 01:33:32.047552    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.277331684Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0328 01:33:32.047552    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.277533790Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0328 01:33:32.047552    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.277593492Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0328 01:33:32.047552    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.277648694Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0328 01:33:32.047552    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.277726596Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0328 01:33:32.047552    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.277896701Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0328 01:33:32.047552    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.279273243Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0328 01:33:32.047552    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.279706256Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0328 01:33:32.047552    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.279852560Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0328 01:33:32.047552    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.280041166Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0328 01:33:32.047552    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.280280073Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0328 01:33:32.047552    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.280373676Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0328 01:33:32.047552    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.280594982Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0328 01:33:32.047552    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.280657284Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0328 01:33:32.048077    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.280684285Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0328 01:33:32.048152    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.280713086Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0328 01:33:32.048152    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.280731986Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0328 01:33:32.048152    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.280779288Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0328 01:33:32.048152    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.281122598Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0328 01:33:32.048152    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.281392306Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0328 01:33:32.048152    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.281419307Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0328 01:33:32.048152    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.281475909Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0328 01:33:32.048152    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.281497309Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0328 01:33:32.048152    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.281513210Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0328 01:33:32.048152    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.281527910Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0328 01:33:32.048152    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.281575712Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0328 01:33:32.048152    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.281605113Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0328 01:33:32.048152    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.281624613Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0328 01:33:32.048152    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.281640414Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0328 01:33:32.048152    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.281688915Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0328 01:33:32.048152    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.281906822Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0328 01:33:32.048152    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.282137929Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0328 01:33:32.048152    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.282171230Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0328 01:33:32.048152    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.282426837Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0328 01:33:32.048152    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.282452838Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0328 01:33:32.048152    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.282645244Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0328 01:33:32.048152    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.282848450Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	I0328 01:33:32.048152    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.282869251Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0328 01:33:32.048152    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.282883451Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	I0328 01:33:32.048152    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.282996354Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0328 01:33:32.048152    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.283034556Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0328 01:33:32.048152    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.283048856Z" level=info msg="NRI interface is disabled by configuration."
	I0328 01:33:32.048152    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.283357365Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0328 01:33:32.048713    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.283501170Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0328 01:33:32.048713    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.283575472Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0328 01:33:32.048713    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.283615173Z" level=info msg="containerd successfully booted in 0.056485s"
	I0328 01:33:32.048713    6044 command_runner.go:130] > Mar 28 01:31:36 multinode-240000 dockerd[661]: time="2024-03-28T01:31:36.252048243Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0328 01:33:32.048713    6044 command_runner.go:130] > Mar 28 01:31:36 multinode-240000 dockerd[661]: time="2024-03-28T01:31:36.458814267Z" level=info msg="Loading containers: start."
	I0328 01:33:32.048713    6044 command_runner.go:130] > Mar 28 01:31:36 multinode-240000 dockerd[661]: time="2024-03-28T01:31:36.940030727Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0328 01:33:32.048713    6044 command_runner.go:130] > Mar 28 01:31:37 multinode-240000 dockerd[661]: time="2024-03-28T01:31:37.031415390Z" level=info msg="Loading containers: done."
	I0328 01:33:32.048713    6044 command_runner.go:130] > Mar 28 01:31:37 multinode-240000 dockerd[661]: time="2024-03-28T01:31:37.065830879Z" level=info msg="Docker daemon" commit=8b79278 containerd-snapshotter=false storage-driver=overlay2 version=26.0.0
	I0328 01:33:32.048713    6044 command_runner.go:130] > Mar 28 01:31:37 multinode-240000 dockerd[661]: time="2024-03-28T01:31:37.066918879Z" level=info msg="Daemon has completed initialization"
	I0328 01:33:32.048868    6044 command_runner.go:130] > Mar 28 01:31:37 multinode-240000 dockerd[661]: time="2024-03-28T01:31:37.126063860Z" level=info msg="API listen on /var/run/docker.sock"
	I0328 01:33:32.048868    6044 command_runner.go:130] > Mar 28 01:31:37 multinode-240000 dockerd[661]: time="2024-03-28T01:31:37.126232160Z" level=info msg="API listen on [::]:2376"
	I0328 01:33:32.048868    6044 command_runner.go:130] > Mar 28 01:31:37 multinode-240000 systemd[1]: Started Docker Application Container Engine.
	I0328 01:33:32.048868    6044 command_runner.go:130] > Mar 28 01:32:04 multinode-240000 dockerd[661]: time="2024-03-28T01:32:04.977526069Z" level=info msg="Processing signal 'terminated'"
	I0328 01:33:32.048954    6044 command_runner.go:130] > Mar 28 01:32:04 multinode-240000 dockerd[661]: time="2024-03-28T01:32:04.980026875Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0328 01:33:32.049019    6044 command_runner.go:130] > Mar 28 01:32:04 multinode-240000 systemd[1]: Stopping Docker Application Container Engine...
	I0328 01:33:32.049083    6044 command_runner.go:130] > Mar 28 01:32:04 multinode-240000 dockerd[661]: time="2024-03-28T01:32:04.981008678Z" level=info msg="Daemon shutdown complete"
	I0328 01:33:32.049135    6044 command_runner.go:130] > Mar 28 01:32:04 multinode-240000 dockerd[661]: time="2024-03-28T01:32:04.981100578Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0328 01:33:32.049222    6044 command_runner.go:130] > Mar 28 01:32:04 multinode-240000 dockerd[661]: time="2024-03-28T01:32:04.981126378Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0328 01:33:32.049303    6044 command_runner.go:130] > Mar 28 01:32:05 multinode-240000 systemd[1]: docker.service: Deactivated successfully.
	I0328 01:33:32.049303    6044 command_runner.go:130] > Mar 28 01:32:05 multinode-240000 systemd[1]: Stopped Docker Application Container Engine.
	I0328 01:33:32.049303    6044 command_runner.go:130] > Mar 28 01:32:05 multinode-240000 systemd[1]: Starting Docker Application Container Engine...
	I0328 01:33:32.049303    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1051]: time="2024-03-28T01:32:06.063559195Z" level=info msg="Starting up"
	I0328 01:33:32.049373    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1051]: time="2024-03-28T01:32:06.064631697Z" level=info msg="containerd not running, starting managed containerd"
	I0328 01:33:32.049373    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1051]: time="2024-03-28T01:32:06.065637900Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1057
	I0328 01:33:32.050165    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.100209087Z" level=info msg="starting containerd" revision=dcf2847247e18caba8dce86522029642f60fe96b version=v1.7.14
	I0328 01:33:32.050165    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.130085762Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0328 01:33:32.050165    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.130208062Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0328 01:33:32.050165    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.130256862Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0328 01:33:32.050165    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.130275562Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0328 01:33:32.050165    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.130311762Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0328 01:33:32.050165    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.130326962Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0328 01:33:32.050165    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.130572163Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0328 01:33:32.050165    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.130673463Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0328 01:33:32.050165    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.130696363Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	I0328 01:33:32.050165    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.130764663Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0328 01:33:32.050165    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.130798363Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0328 01:33:32.050165    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.130926864Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0328 01:33:32.050165    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.134236672Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0328 01:33:32.050165    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.134361772Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0328 01:33:32.050165    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.134599073Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0328 01:33:32.050165    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.134797173Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0328 01:33:32.050165    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.135068574Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0328 01:33:32.050165    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.135093174Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	I0328 01:33:32.050165    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.135148374Z" level=info msg="metadata content store policy set" policy=shared
	I0328 01:33:32.050165    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.135673176Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0328 01:33:32.050165    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.135920276Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0328 01:33:32.050165    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.135946676Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0328 01:33:32.050716    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.135980176Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0328 01:33:32.050716    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.135997376Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0328 01:33:32.050716    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.136050377Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0328 01:33:32.050716    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.136660078Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0328 01:33:32.050716    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.136812179Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0328 01:33:32.050831    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.136923379Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0328 01:33:32.050831    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.136946979Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0328 01:33:32.050831    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.136964679Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0328 01:33:32.050831    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.136991479Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0328 01:33:32.051155    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.137010579Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0328 01:33:32.051155    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.137027279Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0328 01:33:32.051251    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.137099479Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0328 01:33:32.051251    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.137235380Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0328 01:33:32.051251    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.137265080Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0328 01:33:32.051317    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.137281180Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0328 01:33:32.051317    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.137304080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0328 01:33:32.051317    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.137320180Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0328 01:33:32.051317    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.137338080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0328 01:33:32.051379    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.137353080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0328 01:33:32.051379    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.137374080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0328 01:33:32.051444    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.137389280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0328 01:33:32.051444    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.137427380Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0328 01:33:32.051444    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.137553380Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0328 01:33:32.051509    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.137633981Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0328 01:33:32.051509    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.137657481Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0328 01:33:32.051509    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.137672181Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0328 01:33:32.051572    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.137686281Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0328 01:33:32.051596    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.137700481Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0328 01:33:32.051596    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.137771381Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0328 01:33:32.051652    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.137797181Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0328 01:33:32.051652    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.137811481Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0328 01:33:32.051652    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.137826081Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0328 01:33:32.051731    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.137953481Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0328 01:33:32.051759    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.137975581Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	I0328 01:33:32.051819    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.137988781Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0328 01:33:32.051844    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.138001082Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	I0328 01:33:32.051844    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.138075582Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0328 01:33:32.051913    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.138191982Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0328 01:33:32.051913    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.138211082Z" level=info msg="NRI interface is disabled by configuration."
	I0328 01:33:32.051913    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.138597783Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0328 01:33:32.052073    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.138694583Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0328 01:33:32.052073    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.138839884Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0328 01:33:32.052073    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.138866684Z" level=info msg="containerd successfully booted in 0.040774s"
	I0328 01:33:32.052073    6044 command_runner.go:130] > Mar 28 01:32:07 multinode-240000 dockerd[1051]: time="2024-03-28T01:32:07.114634333Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0328 01:33:32.052073    6044 command_runner.go:130] > Mar 28 01:32:07 multinode-240000 dockerd[1051]: time="2024-03-28T01:32:07.151787026Z" level=info msg="Loading containers: start."
	I0328 01:33:32.052073    6044 command_runner.go:130] > Mar 28 01:32:07 multinode-240000 dockerd[1051]: time="2024-03-28T01:32:07.470888727Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0328 01:33:32.052073    6044 command_runner.go:130] > Mar 28 01:32:07 multinode-240000 dockerd[1051]: time="2024-03-28T01:32:07.559958251Z" level=info msg="Loading containers: done."
	I0328 01:33:32.052073    6044 command_runner.go:130] > Mar 28 01:32:07 multinode-240000 dockerd[1051]: time="2024-03-28T01:32:07.589960526Z" level=info msg="Docker daemon" commit=8b79278 containerd-snapshotter=false storage-driver=overlay2 version=26.0.0
	I0328 01:33:32.052231    6044 command_runner.go:130] > Mar 28 01:32:07 multinode-240000 dockerd[1051]: time="2024-03-28T01:32:07.590109426Z" level=info msg="Daemon has completed initialization"
	I0328 01:33:32.052255    6044 command_runner.go:130] > Mar 28 01:32:07 multinode-240000 dockerd[1051]: time="2024-03-28T01:32:07.638170147Z" level=info msg="API listen on /var/run/docker.sock"
	I0328 01:33:32.052255    6044 command_runner.go:130] > Mar 28 01:32:07 multinode-240000 systemd[1]: Started Docker Application Container Engine.
	I0328 01:33:32.052255    6044 command_runner.go:130] > Mar 28 01:32:07 multinode-240000 dockerd[1051]: time="2024-03-28T01:32:07.638290047Z" level=info msg="API listen on [::]:2376"
	I0328 01:33:32.052255    6044 command_runner.go:130] > Mar 28 01:32:08 multinode-240000 systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0328 01:33:32.052255    6044 command_runner.go:130] > Mar 28 01:32:08 multinode-240000 cri-dockerd[1277]: time="2024-03-28T01:32:08Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0328 01:33:32.052339    6044 command_runner.go:130] > Mar 28 01:32:08 multinode-240000 cri-dockerd[1277]: time="2024-03-28T01:32:08Z" level=info msg="Start docker client with request timeout 0s"
	I0328 01:33:32.052339    6044 command_runner.go:130] > Mar 28 01:32:08 multinode-240000 cri-dockerd[1277]: time="2024-03-28T01:32:08Z" level=info msg="Hairpin mode is set to hairpin-veth"
	I0328 01:33:32.052339    6044 command_runner.go:130] > Mar 28 01:32:08 multinode-240000 cri-dockerd[1277]: time="2024-03-28T01:32:08Z" level=info msg="Loaded network plugin cni"
	I0328 01:33:32.052339    6044 command_runner.go:130] > Mar 28 01:32:08 multinode-240000 cri-dockerd[1277]: time="2024-03-28T01:32:08Z" level=info msg="Docker cri networking managed by network plugin cni"
	I0328 01:33:32.052521    6044 command_runner.go:130] > Mar 28 01:32:08 multinode-240000 cri-dockerd[1277]: time="2024-03-28T01:32:08Z" level=info msg="Docker Info: &{ID:c06283fc-1f43-4b26-80be-81922335c5fe Containers:18 ContainersRunning:0 ContainersPaused:0 ContainersStopped:18 Images:10 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:[] Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:[] Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6tables:true Debug:false NFd:27 OomKillDisable:false NGoroutines:49 SystemTime:2024-03-28T01:32:08.776685604Z LoggingDriver:json-file CgroupDriver:cgroupfs CgroupVersion:2 NEventsListener:0 KernelVersion:5.10.207 OperatingSystem:Buildroot 2023.02.9 OSVersion:2023.02.9 OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:0xc0002cf3b0 NCPU:2 MemTotal:2216206336 GenericResources:[] DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:multinode-240000 Labels:[provider=hyperv] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:map[io.containerd.runc.v2:{Path:runc Args:[] Shim:<nil>} runc:{Path:runc Args:[] Shim:<nil>}] DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:[] Nodes:0 Managers:0 Cluster:<nil> Warnings:[]} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dcf2847247e18caba8dce86522029642f60fe96b Expected:dcf2847247e18caba8dce86522029642f60fe96b} RuncCommit:{ID:51d5e94601ceffbbd85688df1c928ecccbfa4685 Expected:51d5e94601ceffbbd85688df1c928ecccbfa4685} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=builtin name=cgroupns] ProductLicense:Community Engine DefaultAddressPools:[] Warnings:[]}"
	I0328 01:33:32.052652    6044 command_runner.go:130] > Mar 28 01:32:08 multinode-240000 cri-dockerd[1277]: time="2024-03-28T01:32:08Z" level=info msg="Setting cgroupDriver cgroupfs"
	I0328 01:33:32.052652    6044 command_runner.go:130] > Mar 28 01:32:08 multinode-240000 cri-dockerd[1277]: time="2024-03-28T01:32:08Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	I0328 01:33:32.052652    6044 command_runner.go:130] > Mar 28 01:32:08 multinode-240000 cri-dockerd[1277]: time="2024-03-28T01:32:08Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	I0328 01:33:32.052716    6044 command_runner.go:130] > Mar 28 01:32:08 multinode-240000 cri-dockerd[1277]: time="2024-03-28T01:32:08Z" level=info msg="Start cri-dockerd grpc backend"
	I0328 01:33:32.052716    6044 command_runner.go:130] > Mar 28 01:32:08 multinode-240000 systemd[1]: Started CRI Interface for Docker Application Container Engine.
	I0328 01:33:32.052752    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 cri-dockerd[1277]: time="2024-03-28T01:32:13Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"busybox-7fdf7869d9-ct428_default\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"930fbfde452c0b2b3f13a6751fc648a70e87137f38175cb6dd161b40193b9a79\""
	I0328 01:33:32.052817    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 cri-dockerd[1277]: time="2024-03-28T01:32:13Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-76f75df574-776ph_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"6b6f67390b0701700963eec28e4c4cc4aa0e852e4ec0f2392f0f6f5d9bdad52a\""
	I0328 01:33:32.052817    6044 command_runner.go:130] > Mar 28 01:32:14 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:14.605075633Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0328 01:33:32.052817    6044 command_runner.go:130] > Mar 28 01:32:14 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:14.605218534Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0328 01:33:32.052817    6044 command_runner.go:130] > Mar 28 01:32:14 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:14.605234734Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0328 01:33:32.052817    6044 command_runner.go:130] > Mar 28 01:32:14 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:14.606038436Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0328 01:33:32.052817    6044 command_runner.go:130] > Mar 28 01:32:14 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:14.748289893Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0328 01:33:32.052817    6044 command_runner.go:130] > Mar 28 01:32:14 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:14.748491293Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0328 01:33:32.052817    6044 command_runner.go:130] > Mar 28 01:32:14 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:14.748521793Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0328 01:33:32.052817    6044 command_runner.go:130] > Mar 28 01:32:14 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:14.748642993Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0328 01:33:32.052817    6044 command_runner.go:130] > Mar 28 01:32:14 multinode-240000 cri-dockerd[1277]: time="2024-03-28T01:32:14Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/3314134e34d83c71815af773bff505973dcb9797421f75a59b98862dc8bc69bf/resolv.conf as [nameserver 172.28.224.1]"
	I0328 01:33:32.052817    6044 command_runner.go:130] > Mar 28 01:32:14 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:14.844158033Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0328 01:33:32.052817    6044 command_runner.go:130] > Mar 28 01:32:14 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:14.844387234Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0328 01:33:32.052817    6044 command_runner.go:130] > Mar 28 01:32:14 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:14.844509634Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0328 01:33:32.052817    6044 command_runner.go:130] > Mar 28 01:32:14 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:14.844924435Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0328 01:33:32.052817    6044 command_runner.go:130] > Mar 28 01:32:14 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:14.862145778Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0328 01:33:32.052817    6044 command_runner.go:130] > Mar 28 01:32:14 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:14.862239979Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0328 01:33:32.052817    6044 command_runner.go:130] > Mar 28 01:32:14 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:14.862251979Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0328 01:33:32.052817    6044 command_runner.go:130] > Mar 28 01:32:14 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:14.862457779Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0328 01:33:32.053243    6044 command_runner.go:130] > Mar 28 01:32:14 multinode-240000 cri-dockerd[1277]: time="2024-03-28T01:32:14Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/8cf9dbbfda9ea6f2b61a134374c1f92196fe22bde8e166de86c62d863a2fbdb9/resolv.conf as [nameserver 172.28.224.1]"
	I0328 01:33:32.053243    6044 command_runner.go:130] > Mar 28 01:32:15 multinode-240000 cri-dockerd[1277]: time="2024-03-28T01:32:15Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/8780a18ab975521e6b1b20e4b7cffe786927f03654dd858b9d179f1d73d13d81/resolv.conf as [nameserver 172.28.224.1]"
	I0328 01:33:32.053243    6044 command_runner.go:130] > Mar 28 01:32:15 multinode-240000 cri-dockerd[1277]: time="2024-03-28T01:32:15Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/4dd7c4652074475872599900ce854e48425a373dfa665073bd9bfb56fa5330c0/resolv.conf as [nameserver 172.28.224.1]"
	I0328 01:33:32.053243    6044 command_runner.go:130] > Mar 28 01:32:15 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:15.196398617Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0328 01:33:32.053243    6044 command_runner.go:130] > Mar 28 01:32:15 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:15.196541018Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0328 01:33:32.053243    6044 command_runner.go:130] > Mar 28 01:32:15 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:15.196606818Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0328 01:33:32.053397    6044 command_runner.go:130] > Mar 28 01:32:15 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:15.199212424Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0328 01:33:32.053397    6044 command_runner.go:130] > Mar 28 01:32:15 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:15.279595426Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0328 01:33:32.053397    6044 command_runner.go:130] > Mar 28 01:32:15 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:15.279693326Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0328 01:33:32.053484    6044 command_runner.go:130] > Mar 28 01:32:15 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:15.279767327Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0328 01:33:32.053484    6044 command_runner.go:130] > Mar 28 01:32:15 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:15.280052327Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0328 01:33:32.053534    6044 command_runner.go:130] > Mar 28 01:32:15 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:15.393428912Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0328 01:33:32.053534    6044 command_runner.go:130] > Mar 28 01:32:15 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:15.393536412Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0328 01:33:32.053534    6044 command_runner.go:130] > Mar 28 01:32:15 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:15.393553112Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0328 01:33:32.053534    6044 command_runner.go:130] > Mar 28 01:32:15 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:15.393951413Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0328 01:33:32.053534    6044 command_runner.go:130] > Mar 28 01:32:15 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:15.409559852Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0328 01:33:32.053534    6044 command_runner.go:130] > Mar 28 01:32:15 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:15.409616852Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0328 01:33:32.053534    6044 command_runner.go:130] > Mar 28 01:32:15 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:15.409628953Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0328 01:33:32.053534    6044 command_runner.go:130] > Mar 28 01:32:15 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:15.410047254Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0328 01:33:32.053534    6044 command_runner.go:130] > Mar 28 01:32:19 multinode-240000 cri-dockerd[1277]: time="2024-03-28T01:32:19Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
	I0328 01:33:32.053534    6044 command_runner.go:130] > Mar 28 01:32:20 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:20.444492990Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0328 01:33:32.053534    6044 command_runner.go:130] > Mar 28 01:32:20 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:20.445565592Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0328 01:33:32.053534    6044 command_runner.go:130] > Mar 28 01:32:20 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:20.461244632Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0328 01:33:32.053534    6044 command_runner.go:130] > Mar 28 01:32:20 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:20.465433642Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0328 01:33:32.053534    6044 command_runner.go:130] > Mar 28 01:32:20 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:20.501034531Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0328 01:33:32.053534    6044 command_runner.go:130] > Mar 28 01:32:20 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:20.501100632Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0328 01:33:32.053534    6044 command_runner.go:130] > Mar 28 01:32:20 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:20.501129332Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0328 01:33:32.053534    6044 command_runner.go:130] > Mar 28 01:32:20 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:20.501289432Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0328 01:33:32.053534    6044 command_runner.go:130] > Mar 28 01:32:20 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:20.552329460Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0328 01:33:32.053534    6044 command_runner.go:130] > Mar 28 01:32:20 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:20.552525461Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0328 01:33:32.053534    6044 command_runner.go:130] > Mar 28 01:32:20 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:20.552550661Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0328 01:33:32.053534    6044 command_runner.go:130] > Mar 28 01:32:20 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:20.553090962Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0328 01:33:32.053534    6044 command_runner.go:130] > Mar 28 01:32:20 multinode-240000 cri-dockerd[1277]: time="2024-03-28T01:32:20Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/dfd01cb54b7d89aef97b057d7578bb34d4f58b0e2c9aacddeeff9fbb19db3cb6/resolv.conf as [nameserver 172.28.224.1]"
	I0328 01:33:32.054065    6044 command_runner.go:130] > Mar 28 01:32:20 multinode-240000 cri-dockerd[1277]: time="2024-03-28T01:32:20Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/821d3cf9ae1a9ffce2f350e9ee239e00fd8743eb338fae8a5b39734fc9cabf5e/resolv.conf as [nameserver 172.28.224.1]"
	I0328 01:33:32.054065    6044 command_runner.go:130] > Mar 28 01:32:21 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:21.129523609Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0328 01:33:32.054065    6044 command_runner.go:130] > Mar 28 01:32:21 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:21.129601909Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0328 01:33:32.054065    6044 command_runner.go:130] > Mar 28 01:32:21 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:21.129619209Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0328 01:33:32.054220    6044 command_runner.go:130] > Mar 28 01:32:21 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:21.129777210Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0328 01:33:32.054220    6044 command_runner.go:130] > Mar 28 01:32:21 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:21.142530242Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0328 01:33:32.054265    6044 command_runner.go:130] > Mar 28 01:32:21 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:21.142656442Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0328 01:33:32.054300    6044 command_runner.go:130] > Mar 28 01:32:21 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:21.142692242Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0328 01:33:32.054300    6044 command_runner.go:130] > Mar 28 01:32:21 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:21.143468544Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0328 01:33:32.054300    6044 command_runner.go:130] > Mar 28 01:32:21 multinode-240000 cri-dockerd[1277]: time="2024-03-28T01:32:21Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/347f7ad7ebaed8796c8b12cf936e661c605c1c7a9dc02ccb15b4c682a96c1058/resolv.conf as [nameserver 172.28.224.1]"
	I0328 01:33:32.054364    6044 command_runner.go:130] > Mar 28 01:32:21 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:21.510503865Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0328 01:33:32.054364    6044 command_runner.go:130] > Mar 28 01:32:21 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:21.512149169Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0328 01:33:32.054364    6044 command_runner.go:130] > Mar 28 01:32:21 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:21.515162977Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0328 01:33:32.054364    6044 command_runner.go:130] > Mar 28 01:32:21 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:21.515941979Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0328 01:33:32.054364    6044 command_runner.go:130] > Mar 28 01:32:51 multinode-240000 dockerd[1051]: time="2024-03-28T01:32:51.802252517Z" level=info msg="ignoring event" container=4dcf03394ea80c2d0427ea263be7c533ab3aaf86ba89e254db95cbdd1b70a343 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0328 01:33:32.054364    6044 command_runner.go:130] > Mar 28 01:32:51 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:51.804266497Z" level=info msg="shim disconnected" id=4dcf03394ea80c2d0427ea263be7c533ab3aaf86ba89e254db95cbdd1b70a343 namespace=moby
	I0328 01:33:32.054364    6044 command_runner.go:130] > Mar 28 01:32:51 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:51.805357585Z" level=warning msg="cleaning up after shim disconnected" id=4dcf03394ea80c2d0427ea263be7c533ab3aaf86ba89e254db95cbdd1b70a343 namespace=moby
	I0328 01:33:32.054364    6044 command_runner.go:130] > Mar 28 01:32:51 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:51.805496484Z" level=info msg="cleaning up dead shim" namespace=moby
	I0328 01:33:32.054364    6044 command_runner.go:130] > Mar 28 01:33:05 multinode-240000 dockerd[1057]: time="2024-03-28T01:33:05.040212718Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0328 01:33:32.054364    6044 command_runner.go:130] > Mar 28 01:33:05 multinode-240000 dockerd[1057]: time="2024-03-28T01:33:05.040328718Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0328 01:33:32.054364    6044 command_runner.go:130] > Mar 28 01:33:05 multinode-240000 dockerd[1057]: time="2024-03-28T01:33:05.041880913Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0328 01:33:32.054364    6044 command_runner.go:130] > Mar 28 01:33:05 multinode-240000 dockerd[1057]: time="2024-03-28T01:33:05.044028408Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0328 01:33:32.054364    6044 command_runner.go:130] > Mar 28 01:33:24 multinode-240000 dockerd[1057]: time="2024-03-28T01:33:24.067078014Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0328 01:33:32.054364    6044 command_runner.go:130] > Mar 28 01:33:24 multinode-240000 dockerd[1057]: time="2024-03-28T01:33:24.067134214Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0328 01:33:32.054364    6044 command_runner.go:130] > Mar 28 01:33:24 multinode-240000 dockerd[1057]: time="2024-03-28T01:33:24.067145514Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0328 01:33:32.054364    6044 command_runner.go:130] > Mar 28 01:33:24 multinode-240000 dockerd[1057]: time="2024-03-28T01:33:24.067230414Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0328 01:33:32.054364    6044 command_runner.go:130] > Mar 28 01:33:24 multinode-240000 dockerd[1057]: time="2024-03-28T01:33:24.074234221Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0328 01:33:32.054364    6044 command_runner.go:130] > Mar 28 01:33:24 multinode-240000 dockerd[1057]: time="2024-03-28T01:33:24.074356921Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0328 01:33:32.054364    6044 command_runner.go:130] > Mar 28 01:33:24 multinode-240000 dockerd[1057]: time="2024-03-28T01:33:24.074428021Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0328 01:33:32.054364    6044 command_runner.go:130] > Mar 28 01:33:24 multinode-240000 dockerd[1057]: time="2024-03-28T01:33:24.074678322Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0328 01:33:32.054894    6044 command_runner.go:130] > Mar 28 01:33:24 multinode-240000 cri-dockerd[1277]: time="2024-03-28T01:33:24Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/d3a9caca4652153f4a871cbd85e3780df506a9ae46da758a86025933fbaed683/resolv.conf as [nameserver 172.28.224.1]"
	I0328 01:33:32.054894    6044 command_runner.go:130] > Mar 28 01:33:24 multinode-240000 cri-dockerd[1277]: time="2024-03-28T01:33:24Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/57a41fbc578d50e83f1d23eab9fdc7d77f76594eb2d17300827b52b00008af13/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	I0328 01:33:32.054894    6044 command_runner.go:130] > Mar 28 01:33:24 multinode-240000 dockerd[1057]: time="2024-03-28T01:33:24.642121747Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0328 01:33:32.054967    6044 command_runner.go:130] > Mar 28 01:33:24 multinode-240000 dockerd[1057]: time="2024-03-28T01:33:24.644702250Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0328 01:33:32.054993    6044 command_runner.go:130] > Mar 28 01:33:24 multinode-240000 dockerd[1057]: time="2024-03-28T01:33:24.644921750Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0328 01:33:32.054993    6044 command_runner.go:130] > Mar 28 01:33:24 multinode-240000 dockerd[1057]: time="2024-03-28T01:33:24.645074450Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0328 01:33:32.055043    6044 command_runner.go:130] > Mar 28 01:33:24 multinode-240000 dockerd[1057]: time="2024-03-28T01:33:24.675693486Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0328 01:33:32.055068    6044 command_runner.go:130] > Mar 28 01:33:24 multinode-240000 dockerd[1057]: time="2024-03-28T01:33:24.675868286Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0328 01:33:32.055068    6044 command_runner.go:130] > Mar 28 01:33:24 multinode-240000 dockerd[1057]: time="2024-03-28T01:33:24.675939787Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0328 01:33:32.055112    6044 command_runner.go:130] > Mar 28 01:33:24 multinode-240000 dockerd[1057]: time="2024-03-28T01:33:24.676054087Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0328 01:33:32.055112    6044 command_runner.go:130] > Mar 28 01:33:27 multinode-240000 dockerd[1051]: 2024/03/28 01:33:27 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0328 01:33:32.055112    6044 command_runner.go:130] > Mar 28 01:33:27 multinode-240000 dockerd[1051]: 2024/03/28 01:33:27 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0328 01:33:32.055112    6044 command_runner.go:130] > Mar 28 01:33:27 multinode-240000 dockerd[1051]: 2024/03/28 01:33:27 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0328 01:33:32.055112    6044 command_runner.go:130] > Mar 28 01:33:27 multinode-240000 dockerd[1051]: 2024/03/28 01:33:27 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0328 01:33:32.055112    6044 command_runner.go:130] > Mar 28 01:33:27 multinode-240000 dockerd[1051]: 2024/03/28 01:33:27 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0328 01:33:32.055112    6044 command_runner.go:130] > Mar 28 01:33:28 multinode-240000 dockerd[1051]: 2024/03/28 01:33:28 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0328 01:33:32.055112    6044 command_runner.go:130] > Mar 28 01:33:28 multinode-240000 dockerd[1051]: 2024/03/28 01:33:28 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0328 01:33:32.055112    6044 command_runner.go:130] > Mar 28 01:33:28 multinode-240000 dockerd[1051]: 2024/03/28 01:33:28 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0328 01:33:32.055112    6044 command_runner.go:130] > Mar 28 01:33:28 multinode-240000 dockerd[1051]: 2024/03/28 01:33:28 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0328 01:33:32.055112    6044 command_runner.go:130] > Mar 28 01:33:28 multinode-240000 dockerd[1051]: 2024/03/28 01:33:28 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0328 01:33:32.055112    6044 command_runner.go:130] > Mar 28 01:33:28 multinode-240000 dockerd[1051]: 2024/03/28 01:33:28 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0328 01:33:32.055112    6044 command_runner.go:130] > Mar 28 01:33:28 multinode-240000 dockerd[1051]: 2024/03/28 01:33:28 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0328 01:33:32.055112    6044 command_runner.go:130] > Mar 28 01:33:31 multinode-240000 dockerd[1051]: 2024/03/28 01:33:31 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0328 01:33:32.055112    6044 command_runner.go:130] > Mar 28 01:33:31 multinode-240000 dockerd[1051]: 2024/03/28 01:33:31 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0328 01:33:32.055112    6044 command_runner.go:130] > Mar 28 01:33:31 multinode-240000 dockerd[1051]: 2024/03/28 01:33:31 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0328 01:33:32.055112    6044 command_runner.go:130] > Mar 28 01:33:31 multinode-240000 dockerd[1051]: 2024/03/28 01:33:31 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0328 01:33:32.055112    6044 command_runner.go:130] > Mar 28 01:33:31 multinode-240000 dockerd[1051]: 2024/03/28 01:33:31 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0328 01:33:32.055112    6044 command_runner.go:130] > Mar 28 01:33:31 multinode-240000 dockerd[1051]: 2024/03/28 01:33:31 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0328 01:33:32.055112    6044 command_runner.go:130] > Mar 28 01:33:31 multinode-240000 dockerd[1051]: 2024/03/28 01:33:31 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0328 01:33:32.055112    6044 command_runner.go:130] > Mar 28 01:33:31 multinode-240000 dockerd[1051]: 2024/03/28 01:33:31 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0328 01:33:32.055112    6044 command_runner.go:130] > Mar 28 01:33:32 multinode-240000 dockerd[1051]: 2024/03/28 01:33:32 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0328 01:33:32.091487    6044 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:33:32.091487    6044 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0328 01:33:32.368419    6044 command_runner.go:130] > Name:               multinode-240000
	I0328 01:33:32.368465    6044 command_runner.go:130] > Roles:              control-plane
	I0328 01:33:32.368465    6044 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0328 01:33:32.368465    6044 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0328 01:33:32.368465    6044 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0328 01:33:32.368547    6044 command_runner.go:130] >                     kubernetes.io/hostname=multinode-240000
	I0328 01:33:32.368547    6044 command_runner.go:130] >                     kubernetes.io/os=linux
	I0328 01:33:32.368547    6044 command_runner.go:130] >                     minikube.k8s.io/commit=1a940f980f77c9bfc8ef93678db8c1f49c9dd79d
	I0328 01:33:32.368547    6044 command_runner.go:130] >                     minikube.k8s.io/name=multinode-240000
	I0328 01:33:32.368547    6044 command_runner.go:130] >                     minikube.k8s.io/primary=true
	I0328 01:33:32.368547    6044 command_runner.go:130] >                     minikube.k8s.io/updated_at=2024_03_28T01_07_32_0700
	I0328 01:33:32.368547    6044 command_runner.go:130] >                     minikube.k8s.io/version=v1.33.0-beta.0
	I0328 01:33:32.368547    6044 command_runner.go:130] >                     node-role.kubernetes.io/control-plane=
	I0328 01:33:32.368547    6044 command_runner.go:130] >                     node.kubernetes.io/exclude-from-external-load-balancers=
	I0328 01:33:32.368547    6044 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0328 01:33:32.368671    6044 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0328 01:33:32.368671    6044 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0328 01:33:32.368671    6044 command_runner.go:130] > CreationTimestamp:  Thu, 28 Mar 2024 01:07:27 +0000
	I0328 01:33:32.368671    6044 command_runner.go:130] > Taints:             <none>
	I0328 01:33:32.368671    6044 command_runner.go:130] > Unschedulable:      false
	I0328 01:33:32.368671    6044 command_runner.go:130] > Lease:
	I0328 01:33:32.368671    6044 command_runner.go:130] >   HolderIdentity:  multinode-240000
	I0328 01:33:32.368671    6044 command_runner.go:130] >   AcquireTime:     <unset>
	I0328 01:33:32.368773    6044 command_runner.go:130] >   RenewTime:       Thu, 28 Mar 2024 01:33:30 +0000
	I0328 01:33:32.368773    6044 command_runner.go:130] > Conditions:
	I0328 01:33:32.368773    6044 command_runner.go:130] >   Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	I0328 01:33:32.368773    6044 command_runner.go:130] >   ----             ------  -----------------                 ------------------                ------                       -------
	I0328 01:33:32.368773    6044 command_runner.go:130] >   MemoryPressure   False   Thu, 28 Mar 2024 01:32:53 +0000   Thu, 28 Mar 2024 01:07:24 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	I0328 01:33:32.368773    6044 command_runner.go:130] >   DiskPressure     False   Thu, 28 Mar 2024 01:32:53 +0000   Thu, 28 Mar 2024 01:07:24 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	I0328 01:33:32.368773    6044 command_runner.go:130] >   PIDPressure      False   Thu, 28 Mar 2024 01:32:53 +0000   Thu, 28 Mar 2024 01:07:24 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	I0328 01:33:32.368773    6044 command_runner.go:130] >   Ready            True    Thu, 28 Mar 2024 01:32:53 +0000   Thu, 28 Mar 2024 01:32:53 +0000   KubeletReady                 kubelet is posting ready status
	I0328 01:33:32.368773    6044 command_runner.go:130] > Addresses:
	I0328 01:33:32.368773    6044 command_runner.go:130] >   InternalIP:  172.28.229.19
	I0328 01:33:32.368773    6044 command_runner.go:130] >   Hostname:    multinode-240000
	I0328 01:33:32.368773    6044 command_runner.go:130] > Capacity:
	I0328 01:33:32.368773    6044 command_runner.go:130] >   cpu:                2
	I0328 01:33:32.368773    6044 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0328 01:33:32.368773    6044 command_runner.go:130] >   hugepages-2Mi:      0
	I0328 01:33:32.368773    6044 command_runner.go:130] >   memory:             2164264Ki
	I0328 01:33:32.368773    6044 command_runner.go:130] >   pods:               110
	I0328 01:33:32.368773    6044 command_runner.go:130] > Allocatable:
	I0328 01:33:32.368773    6044 command_runner.go:130] >   cpu:                2
	I0328 01:33:32.368773    6044 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0328 01:33:32.369013    6044 command_runner.go:130] >   hugepages-2Mi:      0
	I0328 01:33:32.369013    6044 command_runner.go:130] >   memory:             2164264Ki
	I0328 01:33:32.369013    6044 command_runner.go:130] >   pods:               110
	I0328 01:33:32.369013    6044 command_runner.go:130] > System Info:
	I0328 01:33:32.369013    6044 command_runner.go:130] >   Machine ID:                 fe98ff783f164d50926235b1a1a0c9a9
	I0328 01:33:32.369013    6044 command_runner.go:130] >   System UUID:                074b49af-5c50-b749-b0a9-2a3d75bf97a0
	I0328 01:33:32.369013    6044 command_runner.go:130] >   Boot ID:                    88b5f16c-258a-4fb6-a998-e0ffa63edff9
	I0328 01:33:32.369013    6044 command_runner.go:130] >   Kernel Version:             5.10.207
	I0328 01:33:32.369013    6044 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0328 01:33:32.369119    6044 command_runner.go:130] >   Operating System:           linux
	I0328 01:33:32.369148    6044 command_runner.go:130] >   Architecture:               amd64
	I0328 01:33:32.369148    6044 command_runner.go:130] >   Container Runtime Version:  docker://26.0.0
	I0328 01:33:32.369218    6044 command_runner.go:130] >   Kubelet Version:            v1.29.3
	I0328 01:33:32.369218    6044 command_runner.go:130] >   Kube-Proxy Version:         v1.29.3
	I0328 01:33:32.369218    6044 command_runner.go:130] > PodCIDR:                      10.244.0.0/24
	I0328 01:33:32.369218    6044 command_runner.go:130] > PodCIDRs:                     10.244.0.0/24
	I0328 01:33:32.369277    6044 command_runner.go:130] > Non-terminated Pods:          (9 in total)
	I0328 01:33:32.369277    6044 command_runner.go:130] >   Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0328 01:33:32.369277    6044 command_runner.go:130] >   ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	I0328 01:33:32.369277    6044 command_runner.go:130] >   default                     busybox-7fdf7869d9-ct428                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	I0328 01:33:32.369368    6044 command_runner.go:130] >   kube-system                 coredns-76f75df574-776ph                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     25m
	I0328 01:33:32.369368    6044 command_runner.go:130] >   kube-system                 etcd-multinode-240000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         73s
	I0328 01:33:32.369368    6044 command_runner.go:130] >   kube-system                 kindnet-rwghf                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      25m
	I0328 01:33:32.369368    6044 command_runner.go:130] >   kube-system                 kube-apiserver-multinode-240000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         73s
	I0328 01:33:32.369449    6044 command_runner.go:130] >   kube-system                 kube-controller-manager-multinode-240000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         26m
	I0328 01:33:32.369468    6044 command_runner.go:130] >   kube-system                 kube-proxy-47rqg                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         25m
	I0328 01:33:32.369468    6044 command_runner.go:130] >   kube-system                 kube-scheduler-multinode-240000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         26m
	I0328 01:33:32.369526    6044 command_runner.go:130] >   kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         25m
	I0328 01:33:32.369526    6044 command_runner.go:130] > Allocated resources:
	I0328 01:33:32.369526    6044 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0328 01:33:32.369526    6044 command_runner.go:130] >   Resource           Requests     Limits
	I0328 01:33:32.369592    6044 command_runner.go:130] >   --------           --------     ------
	I0328 01:33:32.369592    6044 command_runner.go:130] >   cpu                850m (42%)   100m (5%)
	I0328 01:33:32.369592    6044 command_runner.go:130] >   memory             220Mi (10%)  220Mi (10%)
	I0328 01:33:32.369592    6044 command_runner.go:130] >   ephemeral-storage  0 (0%)       0 (0%)
	I0328 01:33:32.369592    6044 command_runner.go:130] >   hugepages-2Mi      0 (0%)       0 (0%)
	I0328 01:33:32.369653    6044 command_runner.go:130] > Events:
	I0328 01:33:32.369653    6044 command_runner.go:130] >   Type    Reason                   Age                From             Message
	I0328 01:33:32.369653    6044 command_runner.go:130] >   ----    ------                   ----               ----             -------
	I0328 01:33:32.369653    6044 command_runner.go:130] >   Normal  Starting                 25m                kube-proxy       
	I0328 01:33:32.369653    6044 command_runner.go:130] >   Normal  Starting                 69s                kube-proxy       
	I0328 01:33:32.369653    6044 command_runner.go:130] >   Normal  Starting                 26m                kubelet          Starting kubelet.
	I0328 01:33:32.369653    6044 command_runner.go:130] >   Normal  NodeHasSufficientMemory  26m (x8 over 26m)  kubelet          Node multinode-240000 status is now: NodeHasSufficientMemory
	I0328 01:33:32.369653    6044 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    26m (x8 over 26m)  kubelet          Node multinode-240000 status is now: NodeHasNoDiskPressure
	I0328 01:33:32.369777    6044 command_runner.go:130] >   Normal  NodeHasSufficientPID     26m (x7 over 26m)  kubelet          Node multinode-240000 status is now: NodeHasSufficientPID
	I0328 01:33:32.369777    6044 command_runner.go:130] >   Normal  NodeAllocatableEnforced  26m                kubelet          Updated Node Allocatable limit across pods
	I0328 01:33:32.369777    6044 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    26m                kubelet          Node multinode-240000 status is now: NodeHasNoDiskPressure
	I0328 01:33:32.369846    6044 command_runner.go:130] >   Normal  NodeAllocatableEnforced  26m                kubelet          Updated Node Allocatable limit across pods
	I0328 01:33:32.369870    6044 command_runner.go:130] >   Normal  NodeHasSufficientMemory  26m                kubelet          Node multinode-240000 status is now: NodeHasSufficientMemory
	I0328 01:33:32.369870    6044 command_runner.go:130] >   Normal  NodeHasSufficientPID     26m                kubelet          Node multinode-240000 status is now: NodeHasSufficientPID
	I0328 01:33:32.369897    6044 command_runner.go:130] >   Normal  Starting                 26m                kubelet          Starting kubelet.
	I0328 01:33:32.369897    6044 command_runner.go:130] >   Normal  RegisteredNode           25m                node-controller  Node multinode-240000 event: Registered Node multinode-240000 in Controller
	I0328 01:33:32.369897    6044 command_runner.go:130] >   Normal  NodeReady                25m                kubelet          Node multinode-240000 status is now: NodeReady
	I0328 01:33:32.369897    6044 command_runner.go:130] >   Normal  Starting                 79s                kubelet          Starting kubelet.
	I0328 01:33:32.369897    6044 command_runner.go:130] >   Normal  NodeHasSufficientPID     79s (x7 over 79s)  kubelet          Node multinode-240000 status is now: NodeHasSufficientPID
	I0328 01:33:32.369897    6044 command_runner.go:130] >   Normal  NodeAllocatableEnforced  79s                kubelet          Updated Node Allocatable limit across pods
	I0328 01:33:32.369897    6044 command_runner.go:130] >   Normal  NodeHasSufficientMemory  78s (x8 over 79s)  kubelet          Node multinode-240000 status is now: NodeHasSufficientMemory
	I0328 01:33:32.369897    6044 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    78s (x8 over 79s)  kubelet          Node multinode-240000 status is now: NodeHasNoDiskPressure
	I0328 01:33:32.369897    6044 command_runner.go:130] >   Normal  RegisteredNode           60s                node-controller  Node multinode-240000 event: Registered Node multinode-240000 in Controller
	I0328 01:33:32.369897    6044 command_runner.go:130] > Name:               multinode-240000-m02
	I0328 01:33:32.369897    6044 command_runner.go:130] > Roles:              <none>
	I0328 01:33:32.369897    6044 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0328 01:33:32.369897    6044 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0328 01:33:32.369897    6044 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0328 01:33:32.369897    6044 command_runner.go:130] >                     kubernetes.io/hostname=multinode-240000-m02
	I0328 01:33:32.369897    6044 command_runner.go:130] >                     kubernetes.io/os=linux
	I0328 01:33:32.369897    6044 command_runner.go:130] >                     minikube.k8s.io/commit=1a940f980f77c9bfc8ef93678db8c1f49c9dd79d
	I0328 01:33:32.369897    6044 command_runner.go:130] >                     minikube.k8s.io/name=multinode-240000
	I0328 01:33:32.369897    6044 command_runner.go:130] >                     minikube.k8s.io/primary=false
	I0328 01:33:32.369897    6044 command_runner.go:130] >                     minikube.k8s.io/updated_at=2024_03_28T01_10_55_0700
	I0328 01:33:32.369897    6044 command_runner.go:130] >                     minikube.k8s.io/version=v1.33.0-beta.0
	I0328 01:33:32.369897    6044 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0328 01:33:32.369897    6044 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0328 01:33:32.369897    6044 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0328 01:33:32.369897    6044 command_runner.go:130] > CreationTimestamp:  Thu, 28 Mar 2024 01:10:54 +0000
	I0328 01:33:32.369897    6044 command_runner.go:130] > Taints:             node.kubernetes.io/unreachable:NoExecute
	I0328 01:33:32.369897    6044 command_runner.go:130] >                     node.kubernetes.io/unreachable:NoSchedule
	I0328 01:33:32.369897    6044 command_runner.go:130] > Unschedulable:      false
	I0328 01:33:32.369897    6044 command_runner.go:130] > Lease:
	I0328 01:33:32.369897    6044 command_runner.go:130] >   HolderIdentity:  multinode-240000-m02
	I0328 01:33:32.369897    6044 command_runner.go:130] >   AcquireTime:     <unset>
	I0328 01:33:32.369897    6044 command_runner.go:130] >   RenewTime:       Thu, 28 Mar 2024 01:28:58 +0000
	I0328 01:33:32.369897    6044 command_runner.go:130] > Conditions:
	I0328 01:33:32.369897    6044 command_runner.go:130] >   Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	I0328 01:33:32.369897    6044 command_runner.go:130] >   ----             ------    -----------------                 ------------------                ------              -------
	I0328 01:33:32.369897    6044 command_runner.go:130] >   MemoryPressure   Unknown   Thu, 28 Mar 2024 01:27:18 +0000   Thu, 28 Mar 2024 01:33:12 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0328 01:33:32.369897    6044 command_runner.go:130] >   DiskPressure     Unknown   Thu, 28 Mar 2024 01:27:18 +0000   Thu, 28 Mar 2024 01:33:12 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0328 01:33:32.369897    6044 command_runner.go:130] >   PIDPressure      Unknown   Thu, 28 Mar 2024 01:27:18 +0000   Thu, 28 Mar 2024 01:33:12 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0328 01:33:32.369897    6044 command_runner.go:130] >   Ready            Unknown   Thu, 28 Mar 2024 01:27:18 +0000   Thu, 28 Mar 2024 01:33:12 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0328 01:33:32.369897    6044 command_runner.go:130] > Addresses:
	I0328 01:33:32.369897    6044 command_runner.go:130] >   InternalIP:  172.28.230.250
	I0328 01:33:32.369897    6044 command_runner.go:130] >   Hostname:    multinode-240000-m02
	I0328 01:33:32.370438    6044 command_runner.go:130] > Capacity:
	I0328 01:33:32.370438    6044 command_runner.go:130] >   cpu:                2
	I0328 01:33:32.370438    6044 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0328 01:33:32.370438    6044 command_runner.go:130] >   hugepages-2Mi:      0
	I0328 01:33:32.370484    6044 command_runner.go:130] >   memory:             2164264Ki
	I0328 01:33:32.370484    6044 command_runner.go:130] >   pods:               110
	I0328 01:33:32.370484    6044 command_runner.go:130] > Allocatable:
	I0328 01:33:32.370484    6044 command_runner.go:130] >   cpu:                2
	I0328 01:33:32.370484    6044 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0328 01:33:32.370529    6044 command_runner.go:130] >   hugepages-2Mi:      0
	I0328 01:33:32.370529    6044 command_runner.go:130] >   memory:             2164264Ki
	I0328 01:33:32.370556    6044 command_runner.go:130] >   pods:               110
	I0328 01:33:32.370556    6044 command_runner.go:130] > System Info:
	I0328 01:33:32.370556    6044 command_runner.go:130] >   Machine ID:                 2bcbb6f523d04ea69ba7f23d0cdfec75
	I0328 01:33:32.370556    6044 command_runner.go:130] >   System UUID:                d499bd2a-38ff-6a40-b0a5-5534aeedd854
	I0328 01:33:32.370623    6044 command_runner.go:130] >   Boot ID:                    cfc1ec0e-7646-40c9-8245-9d09d77d6b1d
	I0328 01:33:32.370623    6044 command_runner.go:130] >   Kernel Version:             5.10.207
	I0328 01:33:32.370623    6044 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0328 01:33:32.370623    6044 command_runner.go:130] >   Operating System:           linux
	I0328 01:33:32.370623    6044 command_runner.go:130] >   Architecture:               amd64
	I0328 01:33:32.370623    6044 command_runner.go:130] >   Container Runtime Version:  docker://26.0.0
	I0328 01:33:32.370623    6044 command_runner.go:130] >   Kubelet Version:            v1.29.3
	I0328 01:33:32.370623    6044 command_runner.go:130] >   Kube-Proxy Version:         v1.29.3
	I0328 01:33:32.370701    6044 command_runner.go:130] > PodCIDR:                      10.244.1.0/24
	I0328 01:33:32.370701    6044 command_runner.go:130] > PodCIDRs:                     10.244.1.0/24
	I0328 01:33:32.370701    6044 command_runner.go:130] > Non-terminated Pods:          (3 in total)
	I0328 01:33:32.370701    6044 command_runner.go:130] >   Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0328 01:33:32.370760    6044 command_runner.go:130] >   ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	I0328 01:33:32.370760    6044 command_runner.go:130] >   default                     busybox-7fdf7869d9-zgwm4    0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	I0328 01:33:32.370760    6044 command_runner.go:130] >   kube-system                 kindnet-hsnfl               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      22m
	I0328 01:33:32.370760    6044 command_runner.go:130] >   kube-system                 kube-proxy-t88gz            0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	I0328 01:33:32.370826    6044 command_runner.go:130] > Allocated resources:
	I0328 01:33:32.370826    6044 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0328 01:33:32.370826    6044 command_runner.go:130] >   Resource           Requests   Limits
	I0328 01:33:32.370826    6044 command_runner.go:130] >   --------           --------   ------
	I0328 01:33:32.370826    6044 command_runner.go:130] >   cpu                100m (5%)  100m (5%)
	I0328 01:33:32.370884    6044 command_runner.go:130] >   memory             50Mi (2%)  50Mi (2%)
	I0328 01:33:32.370884    6044 command_runner.go:130] >   ephemeral-storage  0 (0%)     0 (0%)
	I0328 01:33:32.370884    6044 command_runner.go:130] >   hugepages-2Mi      0 (0%)     0 (0%)
	I0328 01:33:32.370884    6044 command_runner.go:130] > Events:
	I0328 01:33:32.370884    6044 command_runner.go:130] >   Type    Reason                   Age                From             Message
	I0328 01:33:32.370884    6044 command_runner.go:130] >   ----    ------                   ----               ----             -------
	I0328 01:33:32.370884    6044 command_runner.go:130] >   Normal  Starting                 22m                kube-proxy       
	I0328 01:33:32.370884    6044 command_runner.go:130] >   Normal  Starting                 22m                kubelet          Starting kubelet.
	I0328 01:33:32.370884    6044 command_runner.go:130] >   Normal  NodeHasSufficientMemory  22m (x2 over 22m)  kubelet          Node multinode-240000-m02 status is now: NodeHasSufficientMemory
	I0328 01:33:32.370884    6044 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    22m (x2 over 22m)  kubelet          Node multinode-240000-m02 status is now: NodeHasNoDiskPressure
	I0328 01:33:32.371015    6044 command_runner.go:130] >   Normal  NodeHasSufficientPID     22m (x2 over 22m)  kubelet          Node multinode-240000-m02 status is now: NodeHasSufficientPID
	I0328 01:33:32.371015    6044 command_runner.go:130] >   Normal  NodeAllocatableEnforced  22m                kubelet          Updated Node Allocatable limit across pods
	I0328 01:33:32.371015    6044 command_runner.go:130] >   Normal  RegisteredNode           22m                node-controller  Node multinode-240000-m02 event: Registered Node multinode-240000-m02 in Controller
	I0328 01:33:32.371015    6044 command_runner.go:130] >   Normal  NodeReady                22m                kubelet          Node multinode-240000-m02 status is now: NodeReady
	I0328 01:33:32.371015    6044 command_runner.go:130] >   Normal  RegisteredNode           60s                node-controller  Node multinode-240000-m02 event: Registered Node multinode-240000-m02 in Controller
	I0328 01:33:32.371015    6044 command_runner.go:130] >   Normal  NodeNotReady             20s                node-controller  Node multinode-240000-m02 status is now: NodeNotReady
	I0328 01:33:32.371015    6044 command_runner.go:130] > Name:               multinode-240000-m03
	I0328 01:33:32.371015    6044 command_runner.go:130] > Roles:              <none>
	I0328 01:33:32.371015    6044 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0328 01:33:32.371015    6044 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0328 01:33:32.371143    6044 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0328 01:33:32.371143    6044 command_runner.go:130] >                     kubernetes.io/hostname=multinode-240000-m03
	I0328 01:33:32.371143    6044 command_runner.go:130] >                     kubernetes.io/os=linux
	I0328 01:33:32.371143    6044 command_runner.go:130] >                     minikube.k8s.io/commit=1a940f980f77c9bfc8ef93678db8c1f49c9dd79d
	I0328 01:33:32.371143    6044 command_runner.go:130] >                     minikube.k8s.io/name=multinode-240000
	I0328 01:33:32.371143    6044 command_runner.go:130] >                     minikube.k8s.io/primary=false
	I0328 01:33:32.371143    6044 command_runner.go:130] >                     minikube.k8s.io/updated_at=2024_03_28T01_27_31_0700
	I0328 01:33:32.371143    6044 command_runner.go:130] >                     minikube.k8s.io/version=v1.33.0-beta.0
	I0328 01:33:32.371143    6044 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0328 01:33:32.371220    6044 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0328 01:33:32.371247    6044 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0328 01:33:32.371247    6044 command_runner.go:130] > CreationTimestamp:  Thu, 28 Mar 2024 01:27:30 +0000
	I0328 01:33:32.371247    6044 command_runner.go:130] > Taints:             node.kubernetes.io/unreachable:NoExecute
	I0328 01:33:32.371247    6044 command_runner.go:130] >                     node.kubernetes.io/unreachable:NoSchedule
	I0328 01:33:32.371247    6044 command_runner.go:130] > Unschedulable:      false
	I0328 01:33:32.371247    6044 command_runner.go:130] > Lease:
	I0328 01:33:32.371247    6044 command_runner.go:130] >   HolderIdentity:  multinode-240000-m03
	I0328 01:33:32.371330    6044 command_runner.go:130] >   AcquireTime:     <unset>
	I0328 01:33:32.371330    6044 command_runner.go:130] >   RenewTime:       Thu, 28 Mar 2024 01:28:31 +0000
	I0328 01:33:32.371330    6044 command_runner.go:130] > Conditions:
	I0328 01:33:32.371330    6044 command_runner.go:130] >   Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	I0328 01:33:32.371330    6044 command_runner.go:130] >   ----             ------    -----------------                 ------------------                ------              -------
	I0328 01:33:32.371330    6044 command_runner.go:130] >   MemoryPressure   Unknown   Thu, 28 Mar 2024 01:27:36 +0000   Thu, 28 Mar 2024 01:29:14 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0328 01:33:32.371330    6044 command_runner.go:130] >   DiskPressure     Unknown   Thu, 28 Mar 2024 01:27:36 +0000   Thu, 28 Mar 2024 01:29:14 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0328 01:33:32.371330    6044 command_runner.go:130] >   PIDPressure      Unknown   Thu, 28 Mar 2024 01:27:36 +0000   Thu, 28 Mar 2024 01:29:14 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0328 01:33:32.371330    6044 command_runner.go:130] >   Ready            Unknown   Thu, 28 Mar 2024 01:27:36 +0000   Thu, 28 Mar 2024 01:29:14 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0328 01:33:32.371330    6044 command_runner.go:130] > Addresses:
	I0328 01:33:32.371330    6044 command_runner.go:130] >   InternalIP:  172.28.224.172
	I0328 01:33:32.371330    6044 command_runner.go:130] >   Hostname:    multinode-240000-m03
	I0328 01:33:32.371330    6044 command_runner.go:130] > Capacity:
	I0328 01:33:32.371330    6044 command_runner.go:130] >   cpu:                2
	I0328 01:33:32.371330    6044 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0328 01:33:32.371330    6044 command_runner.go:130] >   hugepages-2Mi:      0
	I0328 01:33:32.371330    6044 command_runner.go:130] >   memory:             2164264Ki
	I0328 01:33:32.371330    6044 command_runner.go:130] >   pods:               110
	I0328 01:33:32.371330    6044 command_runner.go:130] > Allocatable:
	I0328 01:33:32.371330    6044 command_runner.go:130] >   cpu:                2
	I0328 01:33:32.371330    6044 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0328 01:33:32.371330    6044 command_runner.go:130] >   hugepages-2Mi:      0
	I0328 01:33:32.371330    6044 command_runner.go:130] >   memory:             2164264Ki
	I0328 01:33:32.371330    6044 command_runner.go:130] >   pods:               110
	I0328 01:33:32.371330    6044 command_runner.go:130] > System Info:
	I0328 01:33:32.371330    6044 command_runner.go:130] >   Machine ID:                 53e5a22090614654950f5f4d91307651
	I0328 01:33:32.371330    6044 command_runner.go:130] >   System UUID:                1b1cc332-0092-fa4b-8d09-1c599aae83ad
	I0328 01:33:32.371330    6044 command_runner.go:130] >   Boot ID:                    7cabd891-d8ad-4af2-8060-94ae87174528
	I0328 01:33:32.371330    6044 command_runner.go:130] >   Kernel Version:             5.10.207
	I0328 01:33:32.371330    6044 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0328 01:33:32.371330    6044 command_runner.go:130] >   Operating System:           linux
	I0328 01:33:32.371330    6044 command_runner.go:130] >   Architecture:               amd64
	I0328 01:33:32.371330    6044 command_runner.go:130] >   Container Runtime Version:  docker://26.0.0
	I0328 01:33:32.371330    6044 command_runner.go:130] >   Kubelet Version:            v1.29.3
	I0328 01:33:32.371330    6044 command_runner.go:130] >   Kube-Proxy Version:         v1.29.3
	I0328 01:33:32.371330    6044 command_runner.go:130] > PodCIDR:                      10.244.3.0/24
	I0328 01:33:32.371330    6044 command_runner.go:130] > PodCIDRs:                     10.244.3.0/24
	I0328 01:33:32.371330    6044 command_runner.go:130] > Non-terminated Pods:          (2 in total)
	I0328 01:33:32.371330    6044 command_runner.go:130] >   Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0328 01:33:32.371330    6044 command_runner.go:130] >   ---------                   ----                ------------  ----------  ---------------  -------------  ---
	I0328 01:33:32.371330    6044 command_runner.go:130] >   kube-system                 kindnet-jvgx2       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      17m
	I0328 01:33:32.371330    6044 command_runner.go:130] >   kube-system                 kube-proxy-55rch    0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	I0328 01:33:32.371330    6044 command_runner.go:130] > Allocated resources:
	I0328 01:33:32.371330    6044 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0328 01:33:32.371864    6044 command_runner.go:130] >   Resource           Requests   Limits
	I0328 01:33:32.371864    6044 command_runner.go:130] >   --------           --------   ------
	I0328 01:33:32.371864    6044 command_runner.go:130] >   cpu                100m (5%)  100m (5%)
	I0328 01:33:32.371907    6044 command_runner.go:130] >   memory             50Mi (2%)  50Mi (2%)
	I0328 01:33:32.371907    6044 command_runner.go:130] >   ephemeral-storage  0 (0%)     0 (0%)
	I0328 01:33:32.371907    6044 command_runner.go:130] >   hugepages-2Mi      0 (0%)     0 (0%)
	I0328 01:33:32.371907    6044 command_runner.go:130] > Events:
	I0328 01:33:32.371907    6044 command_runner.go:130] >   Type    Reason                   Age                  From             Message
	I0328 01:33:32.371907    6044 command_runner.go:130] >   ----    ------                   ----                 ----             -------
	I0328 01:33:32.371907    6044 command_runner.go:130] >   Normal  Starting                 17m                  kube-proxy       
	I0328 01:33:32.371907    6044 command_runner.go:130] >   Normal  Starting                 5m59s                kube-proxy       
	I0328 01:33:32.371998    6044 command_runner.go:130] >   Normal  NodeHasSufficientMemory  17m (x2 over 17m)    kubelet          Node multinode-240000-m03 status is now: NodeHasSufficientMemory
	I0328 01:33:32.371998    6044 command_runner.go:130] >   Normal  NodeAllocatableEnforced  17m                  kubelet          Updated Node Allocatable limit across pods
	I0328 01:33:32.371998    6044 command_runner.go:130] >   Normal  Starting                 17m                  kubelet          Starting kubelet.
	I0328 01:33:32.371998    6044 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    17m (x2 over 17m)    kubelet          Node multinode-240000-m03 status is now: NodeHasNoDiskPressure
	I0328 01:33:32.371998    6044 command_runner.go:130] >   Normal  NodeHasSufficientPID     17m (x2 over 17m)    kubelet          Node multinode-240000-m03 status is now: NodeHasSufficientPID
	I0328 01:33:32.372091    6044 command_runner.go:130] >   Normal  NodeReady                17m                  kubelet          Node multinode-240000-m03 status is now: NodeReady
	I0328 01:33:32.372091    6044 command_runner.go:130] >   Normal  Starting                 6m2s                 kubelet          Starting kubelet.
	I0328 01:33:32.372091    6044 command_runner.go:130] >   Normal  NodeHasSufficientMemory  6m2s (x2 over 6m2s)  kubelet          Node multinode-240000-m03 status is now: NodeHasSufficientMemory
	I0328 01:33:32.372091    6044 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    6m2s (x2 over 6m2s)  kubelet          Node multinode-240000-m03 status is now: NodeHasNoDiskPressure
	I0328 01:33:32.372152    6044 command_runner.go:130] >   Normal  NodeHasSufficientPID     6m2s (x2 over 6m2s)  kubelet          Node multinode-240000-m03 status is now: NodeHasSufficientPID
	I0328 01:33:32.372152    6044 command_runner.go:130] >   Normal  NodeAllocatableEnforced  6m2s                 kubelet          Updated Node Allocatable limit across pods
	I0328 01:33:32.372152    6044 command_runner.go:130] >   Normal  RegisteredNode           5m58s                node-controller  Node multinode-240000-m03 event: Registered Node multinode-240000-m03 in Controller
	I0328 01:33:32.372216    6044 command_runner.go:130] >   Normal  NodeReady                5m56s                kubelet          Node multinode-240000-m03 status is now: NodeReady
	I0328 01:33:32.372216    6044 command_runner.go:130] >   Normal  NodeNotReady             4m18s                node-controller  Node multinode-240000-m03 status is now: NodeNotReady
	I0328 01:33:32.372216    6044 command_runner.go:130] >   Normal  RegisteredNode           60s                  node-controller  Node multinode-240000-m03 event: Registered Node multinode-240000-m03 in Controller
	I0328 01:33:32.382911    6044 logs.go:123] Gathering logs for kube-controller-manager [ceaccf323dee] ...
	I0328 01:33:32.382911    6044 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ceaccf323dee"
	I0328 01:33:32.418985    6044 command_runner.go:130] ! I0328 01:32:17.221400       1 serving.go:380] Generated self-signed cert in-memory
	I0328 01:33:32.419060    6044 command_runner.go:130] ! I0328 01:32:17.938996       1 controllermanager.go:187] "Starting" version="v1.29.3"
	I0328 01:33:32.419132    6044 command_runner.go:130] ! I0328 01:32:17.939043       1 controllermanager.go:189] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0328 01:33:32.419201    6044 command_runner.go:130] ! I0328 01:32:17.943203       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0328 01:33:32.419281    6044 command_runner.go:130] ! I0328 01:32:17.943369       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0328 01:33:32.419342    6044 command_runner.go:130] ! I0328 01:32:17.944549       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0328 01:33:32.419397    6044 command_runner.go:130] ! I0328 01:32:17.944700       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0328 01:33:32.419397    6044 command_runner.go:130] ! I0328 01:32:21.401842       1 controllermanager.go:735] "Started controller" controller="serviceaccount-token-controller"
	I0328 01:33:32.419397    6044 command_runner.go:130] ! I0328 01:32:21.405585       1 shared_informer.go:311] Waiting for caches to sync for tokens
	I0328 01:33:32.419397    6044 command_runner.go:130] ! I0328 01:32:21.409924       1 controllermanager.go:735] "Started controller" controller="cronjob-controller"
	I0328 01:33:32.419397    6044 command_runner.go:130] ! I0328 01:32:21.410592       1 cronjob_controllerv2.go:139] "Starting cronjob controller v2"
	I0328 01:33:32.419397    6044 command_runner.go:130] ! I0328 01:32:21.410608       1 shared_informer.go:311] Waiting for caches to sync for cronjob
	I0328 01:33:32.419397    6044 command_runner.go:130] ! I0328 01:32:21.415437       1 controllermanager.go:735] "Started controller" controller="certificatesigningrequest-cleaner-controller"
	I0328 01:33:32.419397    6044 command_runner.go:130] ! I0328 01:32:21.415588       1 cleaner.go:83] "Starting CSR cleaner controller"
	I0328 01:33:32.419397    6044 command_runner.go:130] ! I0328 01:32:21.423473       1 controllermanager.go:735] "Started controller" controller="pod-garbage-collector-controller"
	I0328 01:33:32.419397    6044 command_runner.go:130] ! I0328 01:32:21.424183       1 gc_controller.go:101] "Starting GC controller"
	I0328 01:33:32.419397    6044 command_runner.go:130] ! I0328 01:32:21.424205       1 shared_informer.go:311] Waiting for caches to sync for GC
	I0328 01:33:32.419397    6044 command_runner.go:130] ! I0328 01:32:21.428774       1 controllermanager.go:735] "Started controller" controller="replicaset-controller"
	I0328 01:33:32.419397    6044 command_runner.go:130] ! I0328 01:32:21.429480       1 replica_set.go:214] "Starting controller" name="replicaset"
	I0328 01:33:32.419397    6044 command_runner.go:130] ! I0328 01:32:21.429495       1 shared_informer.go:311] Waiting for caches to sync for ReplicaSet
	I0328 01:33:32.419397    6044 command_runner.go:130] ! I0328 01:32:21.434934       1 controllermanager.go:735] "Started controller" controller="token-cleaner-controller"
	I0328 01:33:32.419397    6044 command_runner.go:130] ! I0328 01:32:21.435336       1 tokencleaner.go:112] "Starting token cleaner controller"
	I0328 01:33:32.419397    6044 command_runner.go:130] ! I0328 01:32:21.440600       1 shared_informer.go:311] Waiting for caches to sync for token_cleaner
	I0328 01:33:32.419942    6044 command_runner.go:130] ! I0328 01:32:21.440609       1 shared_informer.go:318] Caches are synced for token_cleaner
	I0328 01:33:32.419942    6044 command_runner.go:130] ! I0328 01:32:21.447308       1 controllermanager.go:735] "Started controller" controller="persistentvolume-binder-controller"
	I0328 01:33:32.419942    6044 command_runner.go:130] ! I0328 01:32:21.450160       1 pv_controller_base.go:319] "Starting persistent volume controller"
	I0328 01:33:32.420047    6044 command_runner.go:130] ! I0328 01:32:21.450574       1 shared_informer.go:311] Waiting for caches to sync for persistent volume
	I0328 01:33:32.420142    6044 command_runner.go:130] ! I0328 01:32:21.459890       1 controllermanager.go:735] "Started controller" controller="taint-eviction-controller"
	I0328 01:33:32.420216    6044 command_runner.go:130] ! I0328 01:32:21.463892       1 taint_eviction.go:285] "Starting" controller="taint-eviction-controller"
	I0328 01:33:32.420216    6044 command_runner.go:130] ! I0328 01:32:21.464792       1 taint_eviction.go:291] "Sending events to api server"
	I0328 01:33:32.420216    6044 command_runner.go:130] ! I0328 01:32:21.465478       1 shared_informer.go:311] Waiting for caches to sync for taint-eviction-controller
	I0328 01:33:32.420273    6044 command_runner.go:130] ! I0328 01:32:21.467842       1 controllermanager.go:735] "Started controller" controller="endpoints-controller"
	I0328 01:33:32.420273    6044 command_runner.go:130] ! I0328 01:32:21.471786       1 endpoints_controller.go:174] "Starting endpoint controller"
	I0328 01:33:32.420297    6044 command_runner.go:130] ! I0328 01:32:21.472200       1 shared_informer.go:311] Waiting for caches to sync for endpoint
	I0328 01:33:32.420324    6044 command_runner.go:130] ! I0328 01:32:21.482388       1 controllermanager.go:735] "Started controller" controller="endpointslice-mirroring-controller"
	I0328 01:33:32.420324    6044 command_runner.go:130] ! I0328 01:32:21.482635       1 endpointslicemirroring_controller.go:223] "Starting EndpointSliceMirroring controller"
	I0328 01:33:32.420324    6044 command_runner.go:130] ! I0328 01:32:21.482650       1 shared_informer.go:311] Waiting for caches to sync for endpoint_slice_mirroring
	I0328 01:33:32.420324    6044 command_runner.go:130] ! I0328 01:32:21.506106       1 shared_informer.go:318] Caches are synced for tokens
	I0328 01:33:32.420324    6044 command_runner.go:130] ! I0328 01:32:21.543460       1 controllermanager.go:735] "Started controller" controller="namespace-controller"
	I0328 01:33:32.420324    6044 command_runner.go:130] ! I0328 01:32:21.543999       1 namespace_controller.go:197] "Starting namespace controller"
	I0328 01:33:32.420324    6044 command_runner.go:130] ! I0328 01:32:21.544021       1 shared_informer.go:311] Waiting for caches to sync for namespace
	I0328 01:33:32.420324    6044 command_runner.go:130] ! I0328 01:32:21.554383       1 controllermanager.go:735] "Started controller" controller="serviceaccount-controller"
	I0328 01:33:32.420324    6044 command_runner.go:130] ! I0328 01:32:21.555541       1 serviceaccounts_controller.go:111] "Starting service account controller"
	I0328 01:33:32.420324    6044 command_runner.go:130] ! I0328 01:32:21.555562       1 shared_informer.go:311] Waiting for caches to sync for service account
	I0328 01:33:32.420324    6044 command_runner.go:130] ! I0328 01:32:21.587795       1 garbagecollector.go:155] "Starting controller" controller="garbagecollector"
	I0328 01:33:32.420324    6044 command_runner.go:130] ! I0328 01:32:21.587823       1 shared_informer.go:311] Waiting for caches to sync for garbage collector
	I0328 01:33:32.420324    6044 command_runner.go:130] ! I0328 01:32:21.587848       1 graph_builder.go:302] "Running" component="GraphBuilder"
	I0328 01:33:32.420324    6044 command_runner.go:130] ! I0328 01:32:21.592263       1 controllermanager.go:735] "Started controller" controller="garbage-collector-controller"
	I0328 01:33:32.420324    6044 command_runner.go:130] ! E0328 01:32:21.607017       1 core.go:270] "Failed to start cloud node lifecycle controller" err="no cloud provider provided"
	I0328 01:33:32.420324    6044 command_runner.go:130] ! I0328 01:32:21.607046       1 controllermanager.go:713] "Warning: skipping controller" controller="cloud-node-lifecycle-controller"
	I0328 01:33:32.420324    6044 command_runner.go:130] ! I0328 01:32:21.629420       1 controllermanager.go:735] "Started controller" controller="persistentvolume-expander-controller"
	I0328 01:33:32.420324    6044 command_runner.go:130] ! I0328 01:32:21.629868       1 expand_controller.go:328] "Starting expand controller"
	I0328 01:33:32.420324    6044 command_runner.go:130] ! I0328 01:32:21.633210       1 shared_informer.go:311] Waiting for caches to sync for expand
	I0328 01:33:32.420324    6044 command_runner.go:130] ! I0328 01:32:21.640307       1 controllermanager.go:735] "Started controller" controller="endpointslice-controller"
	I0328 01:33:32.420854    6044 command_runner.go:130] ! I0328 01:32:21.640871       1 endpointslice_controller.go:264] "Starting endpoint slice controller"
	I0328 01:33:32.420854    6044 command_runner.go:130] ! I0328 01:32:21.641527       1 shared_informer.go:311] Waiting for caches to sync for endpoint_slice
	I0328 01:33:32.420939    6044 command_runner.go:130] ! I0328 01:32:21.649017       1 controllermanager.go:735] "Started controller" controller="replicationcontroller-controller"
	I0328 01:33:32.420939    6044 command_runner.go:130] ! I0328 01:32:21.649755       1 replica_set.go:214] "Starting controller" name="replicationcontroller"
	I0328 01:33:32.421005    6044 command_runner.go:130] ! I0328 01:32:21.649783       1 shared_informer.go:311] Waiting for caches to sync for ReplicationController
	I0328 01:33:32.421005    6044 command_runner.go:130] ! I0328 01:32:21.663585       1 controllermanager.go:735] "Started controller" controller="deployment-controller"
	I0328 01:33:32.421079    6044 command_runner.go:130] ! I0328 01:32:21.666026       1 deployment_controller.go:168] "Starting controller" controller="deployment"
	I0328 01:33:32.421098    6044 command_runner.go:130] ! I0328 01:32:21.666316       1 shared_informer.go:311] Waiting for caches to sync for deployment
	I0328 01:33:32.421098    6044 command_runner.go:130] ! I0328 01:32:21.701619       1 controllermanager.go:735] "Started controller" controller="disruption-controller"
	I0328 01:33:32.421098    6044 command_runner.go:130] ! I0328 01:32:21.705210       1 disruption.go:433] "Sending events to api server."
	I0328 01:33:32.421157    6044 command_runner.go:130] ! I0328 01:32:21.705303       1 disruption.go:444] "Starting disruption controller"
	I0328 01:33:32.421157    6044 command_runner.go:130] ! I0328 01:32:21.705318       1 shared_informer.go:311] Waiting for caches to sync for disruption
	I0328 01:33:32.421208    6044 command_runner.go:130] ! I0328 01:32:21.710857       1 controllermanager.go:735] "Started controller" controller="statefulset-controller"
	I0328 01:33:32.421208    6044 command_runner.go:130] ! I0328 01:32:21.711002       1 stateful_set.go:161] "Starting stateful set controller"
	I0328 01:33:32.421208    6044 command_runner.go:130] ! I0328 01:32:21.711016       1 shared_informer.go:311] Waiting for caches to sync for stateful set
	I0328 01:33:32.421208    6044 command_runner.go:130] ! I0328 01:32:21.722757       1 certificate_controller.go:115] "Starting certificate controller" name="csrsigning-kubelet-serving"
	I0328 01:33:32.421208    6044 command_runner.go:130] ! I0328 01:32:21.723222       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-kubelet-serving
	I0328 01:33:32.421208    6044 command_runner.go:130] ! I0328 01:32:21.723310       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0328 01:33:32.421208    6044 command_runner.go:130] ! I0328 01:32:21.725677       1 certificate_controller.go:115] "Starting certificate controller" name="csrsigning-kubelet-client"
	I0328 01:33:32.421208    6044 command_runner.go:130] ! I0328 01:32:21.725696       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-kubelet-client
	I0328 01:33:32.421208    6044 command_runner.go:130] ! I0328 01:32:21.725759       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0328 01:33:32.421208    6044 command_runner.go:130] ! I0328 01:32:21.726507       1 certificate_controller.go:115] "Starting certificate controller" name="csrsigning-kube-apiserver-client"
	I0328 01:33:32.421208    6044 command_runner.go:130] ! I0328 01:32:21.726521       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client
	I0328 01:33:32.421208    6044 command_runner.go:130] ! I0328 01:32:21.726539       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0328 01:33:32.421208    6044 command_runner.go:130] ! I0328 01:32:21.751095       1 certificate_controller.go:115] "Starting certificate controller" name="csrsigning-legacy-unknown"
	I0328 01:33:32.421208    6044 command_runner.go:130] ! I0328 01:32:21.751136       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
	I0328 01:33:32.421208    6044 command_runner.go:130] ! I0328 01:32:21.751164       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0328 01:33:32.421208    6044 command_runner.go:130] ! I0328 01:32:21.751048       1 controllermanager.go:735] "Started controller" controller="certificatesigningrequest-signing-controller"
	I0328 01:33:32.421208    6044 command_runner.go:130] ! E0328 01:32:21.760877       1 core.go:105] "Failed to start service controller" err="WARNING: no cloud provider provided, services of type LoadBalancer will fail"
	I0328 01:33:32.421208    6044 command_runner.go:130] ! I0328 01:32:21.761111       1 controllermanager.go:713] "Warning: skipping controller" controller="service-lb-controller"
	I0328 01:33:32.421208    6044 command_runner.go:130] ! I0328 01:32:21.770248       1 controllermanager.go:735] "Started controller" controller="persistentvolumeclaim-protection-controller"
	I0328 01:33:32.421208    6044 command_runner.go:130] ! I0328 01:32:21.771349       1 pvc_protection_controller.go:102] "Starting PVC protection controller"
	I0328 01:33:32.421208    6044 command_runner.go:130] ! I0328 01:32:21.771929       1 shared_informer.go:311] Waiting for caches to sync for PVC protection
	I0328 01:33:32.421737    6044 command_runner.go:130] ! I0328 01:32:21.788256       1 controllermanager.go:735] "Started controller" controller="root-ca-certificate-publisher-controller"
	I0328 01:33:32.421737    6044 command_runner.go:130] ! I0328 01:32:21.788511       1 publisher.go:102] "Starting root CA cert publisher controller"
	I0328 01:33:32.421737    6044 command_runner.go:130] ! I0328 01:32:21.788524       1 shared_informer.go:311] Waiting for caches to sync for crt configmap
	I0328 01:33:32.421737    6044 command_runner.go:130] ! I0328 01:32:21.815523       1 controllermanager.go:735] "Started controller" controller="legacy-serviceaccount-token-cleaner-controller"
	I0328 01:33:32.421737    6044 command_runner.go:130] ! I0328 01:32:21.815692       1 legacy_serviceaccount_token_cleaner.go:103] "Starting legacy service account token cleaner controller"
	I0328 01:33:32.421824    6044 command_runner.go:130] ! I0328 01:32:21.816619       1 shared_informer.go:311] Waiting for caches to sync for legacy-service-account-token-cleaner
	I0328 01:33:32.421824    6044 command_runner.go:130] ! I0328 01:32:21.873573       1 controllermanager.go:735] "Started controller" controller="horizontal-pod-autoscaler-controller"
	I0328 01:33:32.421824    6044 command_runner.go:130] ! I0328 01:32:21.873852       1 controllermanager.go:687] "Controller is disabled by a feature gate" controller="validatingadmissionpolicy-status-controller" requiredFeatureGates=["ValidatingAdmissionPolicy"]
	I0328 01:33:32.421881    6044 command_runner.go:130] ! I0328 01:32:21.873869       1 controllermanager.go:687] "Controller is disabled by a feature gate" controller="service-cidr-controller" requiredFeatureGates=["MultiCIDRServiceAllocator"]
	I0328 01:33:32.421905    6044 command_runner.go:130] ! I0328 01:32:21.873702       1 horizontal.go:200] "Starting HPA controller"
	I0328 01:33:32.421905    6044 command_runner.go:130] ! I0328 01:32:21.874098       1 shared_informer.go:311] Waiting for caches to sync for HPA
	I0328 01:33:32.421953    6044 command_runner.go:130] ! I0328 01:32:21.901041       1 controllermanager.go:735] "Started controller" controller="daemonset-controller"
	I0328 01:33:32.421971    6044 command_runner.go:130] ! I0328 01:32:21.901450       1 daemon_controller.go:291] "Starting daemon sets controller"
	I0328 01:33:32.421971    6044 command_runner.go:130] ! I0328 01:32:21.901466       1 shared_informer.go:311] Waiting for caches to sync for daemon sets
	I0328 01:33:32.421971    6044 command_runner.go:130] ! I0328 01:32:21.907150       1 controllermanager.go:735] "Started controller" controller="ttl-controller"
	I0328 01:33:32.421971    6044 command_runner.go:130] ! I0328 01:32:21.907285       1 ttl_controller.go:124] "Starting TTL controller"
	I0328 01:33:32.422030    6044 command_runner.go:130] ! I0328 01:32:21.907294       1 shared_informer.go:311] Waiting for caches to sync for TTL
	I0328 01:33:32.422081    6044 command_runner.go:130] ! I0328 01:32:21.918008       1 controllermanager.go:735] "Started controller" controller="bootstrap-signer-controller"
	I0328 01:33:32.422081    6044 command_runner.go:130] ! I0328 01:32:21.918049       1 core.go:294] "Warning: configure-cloud-routes is set, but no cloud provider specified. Will not configure cloud provider routes."
	I0328 01:33:32.422081    6044 command_runner.go:130] ! I0328 01:32:21.918077       1 controllermanager.go:713] "Warning: skipping controller" controller="node-route-controller"
	I0328 01:33:32.422081    6044 command_runner.go:130] ! I0328 01:32:21.918277       1 shared_informer.go:311] Waiting for caches to sync for bootstrap_signer
	I0328 01:33:32.422081    6044 command_runner.go:130] ! I0328 01:32:21.926280       1 controllermanager.go:735] "Started controller" controller="ephemeral-volume-controller"
	I0328 01:33:32.422081    6044 command_runner.go:130] ! I0328 01:32:21.926334       1 controllermanager.go:687] "Controller is disabled by a feature gate" controller="storageversion-garbage-collector-controller" requiredFeatureGates=["APIServerIdentity","StorageVersionAPI"]
	I0328 01:33:32.422081    6044 command_runner.go:130] ! I0328 01:32:21.926586       1 controller.go:169] "Starting ephemeral volume controller"
	I0328 01:33:32.422081    6044 command_runner.go:130] ! I0328 01:32:21.926965       1 shared_informer.go:311] Waiting for caches to sync for ephemeral
	I0328 01:33:32.422081    6044 command_runner.go:130] ! I0328 01:32:22.081182       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="ingresses.networking.k8s.io"
	I0328 01:33:32.422081    6044 command_runner.go:130] ! I0328 01:32:22.083797       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="networkpolicies.networking.k8s.io"
	I0328 01:33:32.422081    6044 command_runner.go:130] ! I0328 01:32:22.084146       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="roles.rbac.authorization.k8s.io"
	I0328 01:33:32.422081    6044 command_runner.go:130] ! I0328 01:32:22.084540       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="csistoragecapacities.storage.k8s.io"
	I0328 01:33:32.422081    6044 command_runner.go:130] ! W0328 01:32:22.084798       1 shared_informer.go:591] resyncPeriod 19h39m22.96948195s is smaller than resyncCheckPeriod 22h4m29.884091788s and the informer has already started. Changing it to 22h4m29.884091788s
	I0328 01:33:32.422081    6044 command_runner.go:130] ! I0328 01:32:22.085208       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="serviceaccounts"
	I0328 01:33:32.422081    6044 command_runner.go:130] ! I0328 01:32:22.085543       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="horizontalpodautoscalers.autoscaling"
	I0328 01:33:32.422081    6044 command_runner.go:130] ! I0328 01:32:22.085825       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="statefulsets.apps"
	I0328 01:33:32.422081    6044 command_runner.go:130] ! I0328 01:32:22.086183       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="controllerrevisions.apps"
	I0328 01:33:32.422081    6044 command_runner.go:130] ! I0328 01:32:22.086894       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="podtemplates"
	I0328 01:33:32.422081    6044 command_runner.go:130] ! I0328 01:32:22.087069       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="daemonsets.apps"
	I0328 01:33:32.422081    6044 command_runner.go:130] ! I0328 01:32:22.087521       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="leases.coordination.k8s.io"
	I0328 01:33:32.422081    6044 command_runner.go:130] ! I0328 01:32:22.087567       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="endpoints"
	I0328 01:33:32.422081    6044 command_runner.go:130] ! W0328 01:32:22.087624       1 shared_informer.go:591] resyncPeriod 12h6m23.941100832s is smaller than resyncCheckPeriod 22h4m29.884091788s and the informer has already started. Changing it to 22h4m29.884091788s
	I0328 01:33:32.422081    6044 command_runner.go:130] ! I0328 01:32:22.087903       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="deployments.apps"
	I0328 01:33:32.422621    6044 command_runner.go:130] ! I0328 01:32:22.088034       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="rolebindings.rbac.authorization.k8s.io"
	I0328 01:33:32.422696    6044 command_runner.go:130] ! I0328 01:32:22.088275       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="endpointslices.discovery.k8s.io"
	I0328 01:33:32.422696    6044 command_runner.go:130] ! I0328 01:32:22.088741       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="replicasets.apps"
	I0328 01:33:32.422752    6044 command_runner.go:130] ! I0328 01:32:22.089011       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="poddisruptionbudgets.policy"
	I0328 01:33:32.422752    6044 command_runner.go:130] ! I0328 01:32:22.104096       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="cronjobs.batch"
	I0328 01:33:32.422785    6044 command_runner.go:130] ! I0328 01:32:22.124297       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="jobs.batch"
	I0328 01:33:32.422785    6044 command_runner.go:130] ! I0328 01:32:22.131348       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="limitranges"
	I0328 01:33:32.422829    6044 command_runner.go:130] ! I0328 01:32:22.132084       1 controllermanager.go:735] "Started controller" controller="resourcequota-controller"
	I0328 01:33:32.422829    6044 command_runner.go:130] ! I0328 01:32:22.132998       1 resource_quota_controller.go:294] "Starting resource quota controller"
	I0328 01:33:32.422890    6044 command_runner.go:130] ! I0328 01:32:22.133345       1 shared_informer.go:311] Waiting for caches to sync for resource quota
	I0328 01:33:32.422890    6044 command_runner.go:130] ! I0328 01:32:22.134354       1 resource_quota_monitor.go:305] "QuotaMonitor running"
	I0328 01:33:32.422890    6044 command_runner.go:130] ! I0328 01:32:22.146807       1 controllermanager.go:735] "Started controller" controller="job-controller"
	I0328 01:33:32.422977    6044 command_runner.go:130] ! I0328 01:32:22.147286       1 job_controller.go:224] "Starting job controller"
	I0328 01:33:32.423005    6044 command_runner.go:130] ! I0328 01:32:22.147508       1 shared_informer.go:311] Waiting for caches to sync for job
	I0328 01:33:32.423005    6044 command_runner.go:130] ! I0328 01:32:22.165018       1 node_lifecycle_controller.go:425] "Controller will reconcile labels"
	I0328 01:33:32.423005    6044 command_runner.go:130] ! I0328 01:32:22.165501       1 controllermanager.go:735] "Started controller" controller="node-lifecycle-controller"
	I0328 01:33:32.423005    6044 command_runner.go:130] ! I0328 01:32:22.165846       1 node_lifecycle_controller.go:459] "Sending events to api server"
	I0328 01:33:32.423005    6044 command_runner.go:130] ! I0328 01:32:22.166330       1 node_lifecycle_controller.go:470] "Starting node controller"
	I0328 01:33:32.423005    6044 command_runner.go:130] ! I0328 01:32:22.167894       1 shared_informer.go:311] Waiting for caches to sync for taint
	I0328 01:33:32.423005    6044 command_runner.go:130] ! I0328 01:32:22.212429       1 controllermanager.go:735] "Started controller" controller="clusterrole-aggregation-controller"
	I0328 01:33:32.423005    6044 command_runner.go:130] ! I0328 01:32:22.212522       1 clusterroleaggregation_controller.go:189] "Starting ClusterRoleAggregator controller"
	I0328 01:33:32.423005    6044 command_runner.go:130] ! I0328 01:32:22.212533       1 shared_informer.go:311] Waiting for caches to sync for ClusterRoleAggregator
	I0328 01:33:32.423005    6044 command_runner.go:130] ! I0328 01:32:22.258526       1 controllermanager.go:735] "Started controller" controller="persistentvolume-protection-controller"
	I0328 01:33:32.423005    6044 command_runner.go:130] ! I0328 01:32:22.258865       1 pv_protection_controller.go:78] "Starting PV protection controller"
	I0328 01:33:32.423005    6044 command_runner.go:130] ! I0328 01:32:22.258907       1 shared_informer.go:311] Waiting for caches to sync for PV protection
	I0328 01:33:32.423005    6044 command_runner.go:130] ! I0328 01:32:22.324062       1 controllermanager.go:735] "Started controller" controller="ttl-after-finished-controller"
	I0328 01:33:32.423005    6044 command_runner.go:130] ! I0328 01:32:22.324128       1 ttlafterfinished_controller.go:109] "Starting TTL after finished controller"
	I0328 01:33:32.423005    6044 command_runner.go:130] ! I0328 01:32:22.324137       1 shared_informer.go:311] Waiting for caches to sync for TTL after finished
	I0328 01:33:32.423005    6044 command_runner.go:130] ! I0328 01:32:22.358296       1 controllermanager.go:735] "Started controller" controller="certificatesigningrequest-approving-controller"
	I0328 01:33:32.423005    6044 command_runner.go:130] ! I0328 01:32:22.358367       1 certificate_controller.go:115] "Starting certificate controller" name="csrapproving"
	I0328 01:33:32.423005    6044 command_runner.go:130] ! I0328 01:32:22.358377       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrapproving
	I0328 01:33:32.423005    6044 command_runner.go:130] ! I0328 01:32:32.447083       1 range_allocator.go:111] "No Secondary Service CIDR provided. Skipping filtering out secondary service addresses"
	I0328 01:33:32.423005    6044 command_runner.go:130] ! I0328 01:32:32.447529       1 node_ipam_controller.go:160] "Starting ipam controller"
	I0328 01:33:32.423005    6044 command_runner.go:130] ! I0328 01:32:32.447619       1 shared_informer.go:311] Waiting for caches to sync for node
	I0328 01:33:32.423005    6044 command_runner.go:130] ! I0328 01:32:32.447221       1 controllermanager.go:735] "Started controller" controller="node-ipam-controller"
	I0328 01:33:32.423005    6044 command_runner.go:130] ! I0328 01:32:32.451626       1 controllermanager.go:735] "Started controller" controller="persistentvolume-attach-detach-controller"
	I0328 01:33:32.423005    6044 command_runner.go:130] ! I0328 01:32:32.451960       1 controllermanager.go:687] "Controller is disabled by a feature gate" controller="resourceclaim-controller" requiredFeatureGates=["DynamicResourceAllocation"]
	I0328 01:33:32.423005    6044 command_runner.go:130] ! I0328 01:32:32.451695       1 attach_detach_controller.go:337] "Starting attach detach controller"
	I0328 01:33:32.423005    6044 command_runner.go:130] ! I0328 01:32:32.452296       1 shared_informer.go:311] Waiting for caches to sync for attach detach
	I0328 01:33:32.423005    6044 command_runner.go:130] ! I0328 01:32:32.465613       1 shared_informer.go:311] Waiting for caches to sync for resource quota
	I0328 01:33:32.423005    6044 command_runner.go:130] ! I0328 01:32:32.470233       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-240000-m02"
	I0328 01:33:32.423005    6044 command_runner.go:130] ! I0328 01:32:32.470509       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-240000-m02"
	I0328 01:33:32.423005    6044 command_runner.go:130] ! I0328 01:32:32.470641       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-240000-m02"
	I0328 01:33:32.423005    6044 command_runner.go:130] ! I0328 01:32:32.471011       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-240000\" does not exist"
	I0328 01:33:32.423005    6044 command_runner.go:130] ! I0328 01:32:32.471142       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-240000-m02\" does not exist"
	I0328 01:33:32.423005    6044 command_runner.go:130] ! I0328 01:32:32.471391       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-240000-m03\" does not exist"
	I0328 01:33:32.423005    6044 command_runner.go:130] ! I0328 01:32:32.496560       1 shared_informer.go:311] Waiting for caches to sync for garbage collector
	I0328 01:33:32.423005    6044 command_runner.go:130] ! I0328 01:32:32.507769       1 shared_informer.go:318] Caches are synced for TTL
	I0328 01:33:32.423534    6044 command_runner.go:130] ! I0328 01:32:32.513624       1 shared_informer.go:318] Caches are synced for ClusterRoleAggregator
	I0328 01:33:32.423534    6044 command_runner.go:130] ! I0328 01:32:32.518304       1 shared_informer.go:318] Caches are synced for bootstrap_signer
	I0328 01:33:32.423534    6044 command_runner.go:130] ! I0328 01:32:32.519904       1 shared_informer.go:318] Caches are synced for cronjob
	I0328 01:33:32.423578    6044 command_runner.go:130] ! I0328 01:32:32.524287       1 shared_informer.go:318] Caches are synced for TTL after finished
	I0328 01:33:32.423578    6044 command_runner.go:130] ! I0328 01:32:32.529587       1 shared_informer.go:318] Caches are synced for ReplicaSet
	I0328 01:33:32.423578    6044 command_runner.go:130] ! I0328 01:32:32.531767       1 shared_informer.go:318] Caches are synced for ephemeral
	I0328 01:33:32.423578    6044 command_runner.go:130] ! I0328 01:32:32.533493       1 shared_informer.go:318] Caches are synced for expand
	I0328 01:33:32.423578    6044 command_runner.go:130] ! I0328 01:32:32.549795       1 shared_informer.go:318] Caches are synced for job
	I0328 01:33:32.423578    6044 command_runner.go:130] ! I0328 01:32:32.550526       1 shared_informer.go:318] Caches are synced for namespace
	I0328 01:33:32.423578    6044 command_runner.go:130] ! I0328 01:32:32.550874       1 shared_informer.go:318] Caches are synced for ReplicationController
	I0328 01:33:32.423578    6044 command_runner.go:130] ! I0328 01:32:32.551065       1 shared_informer.go:318] Caches are synced for node
	I0328 01:33:32.423578    6044 command_runner.go:130] ! I0328 01:32:32.551152       1 range_allocator.go:174] "Sending events to api server"
	I0328 01:33:32.423712    6044 command_runner.go:130] ! I0328 01:32:32.551255       1 range_allocator.go:178] "Starting range CIDR allocator"
	I0328 01:33:32.423849    6044 command_runner.go:130] ! I0328 01:32:32.551308       1 shared_informer.go:311] Waiting for caches to sync for cidrallocator
	I0328 01:33:32.423849    6044 command_runner.go:130] ! I0328 01:32:32.551340       1 shared_informer.go:318] Caches are synced for cidrallocator
	I0328 01:33:32.423849    6044 command_runner.go:130] ! I0328 01:32:32.554992       1 shared_informer.go:318] Caches are synced for attach detach
	I0328 01:33:32.423849    6044 command_runner.go:130] ! I0328 01:32:32.555603       1 shared_informer.go:318] Caches are synced for service account
	I0328 01:33:32.423849    6044 command_runner.go:130] ! I0328 01:32:32.555933       1 shared_informer.go:318] Caches are synced for persistent volume
	I0328 01:33:32.423849    6044 command_runner.go:130] ! I0328 01:32:32.568824       1 shared_informer.go:318] Caches are synced for taint
	I0328 01:33:32.423849    6044 command_runner.go:130] ! I0328 01:32:32.568944       1 node_lifecycle_controller.go:1222] "Initializing eviction metric for zone" zone=""
	I0328 01:33:32.423849    6044 command_runner.go:130] ! I0328 01:32:32.568985       1 shared_informer.go:318] Caches are synced for taint-eviction-controller
	I0328 01:33:32.423849    6044 command_runner.go:130] ! I0328 01:32:32.569031       1 shared_informer.go:318] Caches are synced for deployment
	I0328 01:33:32.423849    6044 command_runner.go:130] ! I0328 01:32:32.573248       1 event.go:376] "Event occurred" object="multinode-240000" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-240000 event: Registered Node multinode-240000 in Controller"
	I0328 01:33:32.423849    6044 command_runner.go:130] ! I0328 01:32:32.573552       1 event.go:376] "Event occurred" object="multinode-240000-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-240000-m02 event: Registered Node multinode-240000-m02 in Controller"
	I0328 01:33:32.423849    6044 command_runner.go:130] ! I0328 01:32:32.573778       1 event.go:376] "Event occurred" object="multinode-240000-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-240000-m03 event: Registered Node multinode-240000-m03 in Controller"
	I0328 01:33:32.423849    6044 command_runner.go:130] ! I0328 01:32:32.573567       1 shared_informer.go:318] Caches are synced for PV protection
	I0328 01:33:32.423849    6044 command_runner.go:130] ! I0328 01:32:32.573253       1 shared_informer.go:318] Caches are synced for PVC protection
	I0328 01:33:32.423849    6044 command_runner.go:130] ! I0328 01:32:32.575355       1 shared_informer.go:318] Caches are synced for HPA
	I0328 01:33:32.423849    6044 command_runner.go:130] ! I0328 01:32:32.588982       1 shared_informer.go:318] Caches are synced for crt configmap
	I0328 01:33:32.423849    6044 command_runner.go:130] ! I0328 01:32:32.602942       1 shared_informer.go:318] Caches are synced for daemon sets
	I0328 01:33:32.423849    6044 command_runner.go:130] ! I0328 01:32:32.605960       1 shared_informer.go:318] Caches are synced for disruption
	I0328 01:33:32.423849    6044 command_runner.go:130] ! I0328 01:32:32.607311       1 node_lifecycle_controller.go:874] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-240000"
	I0328 01:33:32.423849    6044 command_runner.go:130] ! I0328 01:32:32.607638       1 node_lifecycle_controller.go:874] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-240000-m02"
	I0328 01:33:32.423849    6044 command_runner.go:130] ! I0328 01:32:32.608098       1 node_lifecycle_controller.go:874] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-240000-m03"
	I0328 01:33:32.423849    6044 command_runner.go:130] ! I0328 01:32:32.608944       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="76.132556ms"
	I0328 01:33:32.423849    6044 command_runner.go:130] ! I0328 01:32:32.609570       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="79.623412ms"
	I0328 01:33:32.423849    6044 command_runner.go:130] ! I0328 01:32:32.610117       1 node_lifecycle_controller.go:1068] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I0328 01:33:32.423849    6044 command_runner.go:130] ! I0328 01:32:32.611937       1 shared_informer.go:318] Caches are synced for stateful set
	I0328 01:33:32.423849    6044 command_runner.go:130] ! I0328 01:32:32.612346       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="59.398µs"
	I0328 01:33:32.423849    6044 command_runner.go:130] ! I0328 01:32:32.612652       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="32.799µs"
	I0328 01:33:32.423849    6044 command_runner.go:130] ! I0328 01:32:32.618783       1 shared_informer.go:318] Caches are synced for legacy-service-account-token-cleaner
	I0328 01:33:32.423849    6044 command_runner.go:130] ! I0328 01:32:32.623971       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-serving
	I0328 01:33:32.424373    6044 command_runner.go:130] ! I0328 01:32:32.624286       1 shared_informer.go:318] Caches are synced for GC
	I0328 01:33:32.424373    6044 command_runner.go:130] ! I0328 01:32:32.626634       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0328 01:33:32.424373    6044 command_runner.go:130] ! I0328 01:32:32.626831       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-client
	I0328 01:33:32.424443    6044 command_runner.go:130] ! I0328 01:32:32.651676       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-legacy-unknown
	I0328 01:33:32.424443    6044 command_runner.go:130] ! I0328 01:32:32.659290       1 shared_informer.go:318] Caches are synced for certificate-csrapproving
	I0328 01:33:32.424443    6044 command_runner.go:130] ! I0328 01:32:32.667521       1 shared_informer.go:318] Caches are synced for resource quota
	I0328 01:33:32.424495    6044 command_runner.go:130] ! I0328 01:32:32.683826       1 shared_informer.go:318] Caches are synced for endpoint_slice_mirroring
	I0328 01:33:32.424495    6044 command_runner.go:130] ! I0328 01:32:32.683944       1 shared_informer.go:318] Caches are synced for endpoint
	I0328 01:33:32.424495    6044 command_runner.go:130] ! I0328 01:32:32.737259       1 shared_informer.go:318] Caches are synced for resource quota
	I0328 01:33:32.424533    6044 command_runner.go:130] ! I0328 01:32:32.742870       1 shared_informer.go:318] Caches are synced for endpoint_slice
	I0328 01:33:32.424533    6044 command_runner.go:130] ! I0328 01:32:33.088175       1 shared_informer.go:318] Caches are synced for garbage collector
	I0328 01:33:32.424578    6044 command_runner.go:130] ! I0328 01:32:33.088209       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I0328 01:33:32.424578    6044 command_runner.go:130] ! I0328 01:32:33.097231       1 shared_informer.go:318] Caches are synced for garbage collector
	I0328 01:33:32.424578    6044 command_runner.go:130] ! I0328 01:32:53.970448       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-240000-m02"
	I0328 01:33:32.424578    6044 command_runner.go:130] ! I0328 01:32:57.647643       1 event.go:376] "Event occurred" object="kube-system/storage-provisioner" fieldPath="" kind="Pod" apiVersion="v1" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod kube-system/storage-provisioner"
	I0328 01:33:32.424664    6044 command_runner.go:130] ! I0328 01:32:57.647943       1 event.go:376] "Event occurred" object="default/busybox-7fdf7869d9-ct428" fieldPath="" kind="Pod" apiVersion="v1" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-7fdf7869d9-ct428"
	I0328 01:33:32.424664    6044 command_runner.go:130] ! I0328 01:32:57.648069       1 event.go:376] "Event occurred" object="kube-system/coredns-76f75df574-776ph" fieldPath="" kind="Pod" apiVersion="v1" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod kube-system/coredns-76f75df574-776ph"
	I0328 01:33:32.424664    6044 command_runner.go:130] ! I0328 01:33:12.667954       1 event.go:376] "Event occurred" object="multinode-240000-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node multinode-240000-m02 status is now: NodeNotReady"
	I0328 01:33:32.424756    6044 command_runner.go:130] ! I0328 01:33:12.686681       1 event.go:376] "Event occurred" object="default/busybox-7fdf7869d9-zgwm4" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0328 01:33:32.424785    6044 command_runner.go:130] ! I0328 01:33:12.698519       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="13.246789ms"
	I0328 01:33:32.424815    6044 command_runner.go:130] ! I0328 01:33:12.699114       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="37.9µs"
	I0328 01:33:32.424815    6044 command_runner.go:130] ! I0328 01:33:12.709080       1 event.go:376] "Event occurred" object="kube-system/kindnet-hsnfl" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0328 01:33:32.424815    6044 command_runner.go:130] ! I0328 01:33:12.733251       1 event.go:376] "Event occurred" object="kube-system/kube-proxy-t88gz" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0328 01:33:32.424815    6044 command_runner.go:130] ! I0328 01:33:25.571898       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="20.940169ms"
	I0328 01:33:32.424815    6044 command_runner.go:130] ! I0328 01:33:25.572013       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="31.4µs"
	I0328 01:33:32.424815    6044 command_runner.go:130] ! I0328 01:33:25.596419       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="70.5µs"
	I0328 01:33:32.424815    6044 command_runner.go:130] ! I0328 01:33:25.652921       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="18.37866ms"
	I0328 01:33:32.424815    6044 command_runner.go:130] ! I0328 01:33:25.653855       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="42.9µs"
	I0328 01:33:32.442682    6044 logs.go:123] Gathering logs for kube-controller-manager [1aa05268773e] ...
	I0328 01:33:32.442682    6044 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1aa05268773e"
	I0328 01:33:32.481704    6044 command_runner.go:130] ! I0328 01:07:25.444563       1 serving.go:380] Generated self-signed cert in-memory
	I0328 01:33:32.481704    6044 command_runner.go:130] ! I0328 01:07:26.119304       1 controllermanager.go:187] "Starting" version="v1.29.3"
	I0328 01:33:32.481704    6044 command_runner.go:130] ! I0328 01:07:26.119639       1 controllermanager.go:189] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0328 01:33:32.481704    6044 command_runner.go:130] ! I0328 01:07:26.122078       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0328 01:33:32.481704    6044 command_runner.go:130] ! I0328 01:07:26.122399       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0328 01:33:32.481704    6044 command_runner.go:130] ! I0328 01:07:26.123748       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0328 01:33:32.481704    6044 command_runner.go:130] ! I0328 01:07:26.124035       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0328 01:33:32.481704    6044 command_runner.go:130] ! I0328 01:07:29.961001       1 controllermanager.go:735] "Started controller" controller="serviceaccount-token-controller"
	I0328 01:33:32.481704    6044 command_runner.go:130] ! I0328 01:07:29.961384       1 shared_informer.go:311] Waiting for caches to sync for tokens
	I0328 01:33:32.481704    6044 command_runner.go:130] ! I0328 01:07:29.977654       1 controllermanager.go:735] "Started controller" controller="serviceaccount-controller"
	I0328 01:33:32.481704    6044 command_runner.go:130] ! I0328 01:07:29.978314       1 serviceaccounts_controller.go:111] "Starting service account controller"
	I0328 01:33:32.482708    6044 command_runner.go:130] ! I0328 01:07:29.978353       1 shared_informer.go:311] Waiting for caches to sync for service account
	I0328 01:33:32.482708    6044 command_runner.go:130] ! I0328 01:07:29.991603       1 controllermanager.go:735] "Started controller" controller="job-controller"
	I0328 01:33:32.482708    6044 command_runner.go:130] ! I0328 01:07:29.992075       1 job_controller.go:224] "Starting job controller"
	I0328 01:33:32.482708    6044 command_runner.go:130] ! I0328 01:07:29.992191       1 shared_informer.go:311] Waiting for caches to sync for job
	I0328 01:33:32.482708    6044 command_runner.go:130] ! I0328 01:07:30.016866       1 controllermanager.go:735] "Started controller" controller="cronjob-controller"
	I0328 01:33:32.482708    6044 command_runner.go:130] ! I0328 01:07:30.017722       1 cronjob_controllerv2.go:139] "Starting cronjob controller v2"
	I0328 01:33:32.482708    6044 command_runner.go:130] ! I0328 01:07:30.017738       1 shared_informer.go:311] Waiting for caches to sync for cronjob
	I0328 01:33:32.482708    6044 command_runner.go:130] ! I0328 01:07:30.032215       1 node_lifecycle_controller.go:425] "Controller will reconcile labels"
	I0328 01:33:32.482708    6044 command_runner.go:130] ! I0328 01:07:30.032285       1 controllermanager.go:735] "Started controller" controller="node-lifecycle-controller"
	I0328 01:33:32.482708    6044 command_runner.go:130] ! I0328 01:07:30.032300       1 core.go:294] "Warning: configure-cloud-routes is set, but no cloud provider specified. Will not configure cloud provider routes."
	I0328 01:33:32.482708    6044 command_runner.go:130] ! I0328 01:07:30.032309       1 controllermanager.go:713] "Warning: skipping controller" controller="node-route-controller"
	I0328 01:33:32.482708    6044 command_runner.go:130] ! I0328 01:07:30.032580       1 node_lifecycle_controller.go:459] "Sending events to api server"
	I0328 01:33:32.482708    6044 command_runner.go:130] ! I0328 01:07:30.032630       1 node_lifecycle_controller.go:470] "Starting node controller"
	I0328 01:33:32.482708    6044 command_runner.go:130] ! I0328 01:07:30.032638       1 shared_informer.go:311] Waiting for caches to sync for taint
	I0328 01:33:32.482708    6044 command_runner.go:130] ! I0328 01:07:30.048026       1 controllermanager.go:735] "Started controller" controller="persistentvolume-protection-controller"
	I0328 01:33:32.482708    6044 command_runner.go:130] ! I0328 01:07:30.048977       1 pv_protection_controller.go:78] "Starting PV protection controller"
	I0328 01:33:32.482708    6044 command_runner.go:130] ! I0328 01:07:30.049064       1 shared_informer.go:311] Waiting for caches to sync for PV protection
	I0328 01:33:32.482708    6044 command_runner.go:130] ! I0328 01:07:30.062689       1 shared_informer.go:318] Caches are synced for tokens
	I0328 01:33:32.482708    6044 command_runner.go:130] ! I0328 01:07:30.089724       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="cronjobs.batch"
	I0328 01:33:32.482708    6044 command_runner.go:130] ! I0328 01:07:30.089888       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="deployments.apps"
	I0328 01:33:32.482708    6044 command_runner.go:130] ! I0328 01:07:30.089911       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="jobs.batch"
	I0328 01:33:32.482708    6044 command_runner.go:130] ! W0328 01:07:30.089999       1 shared_informer.go:591] resyncPeriod 14h20m6.725226039s is smaller than resyncCheckPeriod 16h11m20.804614115s and the informer has already started. Changing it to 16h11m20.804614115s
	I0328 01:33:32.482708    6044 command_runner.go:130] ! I0328 01:07:30.090238       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="serviceaccounts"
	I0328 01:33:32.482708    6044 command_runner.go:130] ! I0328 01:07:30.090386       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="daemonsets.apps"
	I0328 01:33:32.482708    6044 command_runner.go:130] ! I0328 01:07:30.090486       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="csistoragecapacities.storage.k8s.io"
	I0328 01:33:32.482708    6044 command_runner.go:130] ! I0328 01:07:30.090728       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="horizontalpodautoscalers.autoscaling"
	I0328 01:33:32.482708    6044 command_runner.go:130] ! I0328 01:07:30.090833       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="leases.coordination.k8s.io"
	I0328 01:33:32.482708    6044 command_runner.go:130] ! I0328 01:07:30.090916       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="endpointslices.discovery.k8s.io"
	I0328 01:33:32.482708    6044 command_runner.go:130] ! I0328 01:07:30.091233       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="statefulsets.apps"
	I0328 01:33:32.482708    6044 command_runner.go:130] ! I0328 01:07:30.091333       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="endpoints"
	I0328 01:33:32.482708    6044 command_runner.go:130] ! I0328 01:07:30.091456       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="roles.rbac.authorization.k8s.io"
	I0328 01:33:32.482708    6044 command_runner.go:130] ! I0328 01:07:30.091573       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="rolebindings.rbac.authorization.k8s.io"
	I0328 01:33:32.482708    6044 command_runner.go:130] ! I0328 01:07:30.091823       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="ingresses.networking.k8s.io"
	I0328 01:33:32.482708    6044 command_runner.go:130] ! I0328 01:07:30.091924       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="networkpolicies.networking.k8s.io"
	I0328 01:33:32.482708    6044 command_runner.go:130] ! I0328 01:07:30.092241       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="poddisruptionbudgets.policy"
	I0328 01:33:32.482708    6044 command_runner.go:130] ! I0328 01:07:30.092436       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="podtemplates"
	I0328 01:33:32.482708    6044 command_runner.go:130] ! I0328 01:07:30.092587       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="controllerrevisions.apps"
	I0328 01:33:32.482708    6044 command_runner.go:130] ! I0328 01:07:30.092720       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="limitranges"
	I0328 01:33:32.482708    6044 command_runner.go:130] ! I0328 01:07:30.092907       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="replicasets.apps"
	I0328 01:33:32.482708    6044 command_runner.go:130] ! I0328 01:07:30.092993       1 controllermanager.go:735] "Started controller" controller="resourcequota-controller"
	I0328 01:33:32.482708    6044 command_runner.go:130] ! I0328 01:07:30.093270       1 resource_quota_controller.go:294] "Starting resource quota controller"
	I0328 01:33:32.482708    6044 command_runner.go:130] ! I0328 01:07:30.095516       1 shared_informer.go:311] Waiting for caches to sync for resource quota
	I0328 01:33:32.482708    6044 command_runner.go:130] ! I0328 01:07:30.095735       1 resource_quota_monitor.go:305] "QuotaMonitor running"
	I0328 01:33:32.482708    6044 command_runner.go:130] ! I0328 01:07:30.117824       1 controllermanager.go:735] "Started controller" controller="legacy-serviceaccount-token-cleaner-controller"
	I0328 01:33:32.482708    6044 command_runner.go:130] ! I0328 01:07:30.117990       1 legacy_serviceaccount_token_cleaner.go:103] "Starting legacy service account token cleaner controller"
	I0328 01:33:32.482708    6044 command_runner.go:130] ! I0328 01:07:30.118005       1 shared_informer.go:311] Waiting for caches to sync for legacy-service-account-token-cleaner
	I0328 01:33:32.482708    6044 command_runner.go:130] ! I0328 01:07:30.139352       1 controllermanager.go:735] "Started controller" controller="disruption-controller"
	I0328 01:33:32.482708    6044 command_runner.go:130] ! I0328 01:07:30.139526       1 disruption.go:433] "Sending events to api server."
	I0328 01:33:32.482708    6044 command_runner.go:130] ! I0328 01:07:30.139561       1 disruption.go:444] "Starting disruption controller"
	I0328 01:33:32.482708    6044 command_runner.go:130] ! I0328 01:07:30.139568       1 shared_informer.go:311] Waiting for caches to sync for disruption
	I0328 01:33:32.482708    6044 command_runner.go:130] ! I0328 01:07:30.158607       1 controllermanager.go:735] "Started controller" controller="certificatesigningrequest-approving-controller"
	I0328 01:33:32.482708    6044 command_runner.go:130] ! I0328 01:07:30.158860       1 certificate_controller.go:115] "Starting certificate controller" name="csrapproving"
	I0328 01:33:32.482708    6044 command_runner.go:130] ! I0328 01:07:30.158912       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrapproving
	I0328 01:33:32.482708    6044 command_runner.go:130] ! I0328 01:07:30.170615       1 controllermanager.go:735] "Started controller" controller="persistentvolume-binder-controller"
	I0328 01:33:32.482708    6044 command_runner.go:130] ! I0328 01:07:30.171245       1 pv_controller_base.go:319] "Starting persistent volume controller"
	I0328 01:33:32.482708    6044 command_runner.go:130] ! I0328 01:07:30.171330       1 shared_informer.go:311] Waiting for caches to sync for persistent volume
	I0328 01:33:32.482708    6044 command_runner.go:130] ! I0328 01:07:30.319254       1 controllermanager.go:735] "Started controller" controller="clusterrole-aggregation-controller"
	I0328 01:33:32.482708    6044 command_runner.go:130] ! I0328 01:07:30.319305       1 clusterroleaggregation_controller.go:189] "Starting ClusterRoleAggregator controller"
	I0328 01:33:32.482708    6044 command_runner.go:130] ! I0328 01:07:30.319687       1 shared_informer.go:311] Waiting for caches to sync for ClusterRoleAggregator
	I0328 01:33:32.482708    6044 command_runner.go:130] ! I0328 01:07:30.471941       1 controllermanager.go:735] "Started controller" controller="ttl-after-finished-controller"
	I0328 01:33:32.482708    6044 command_runner.go:130] ! I0328 01:07:30.472075       1 controllermanager.go:687] "Controller is disabled by a feature gate" controller="validatingadmissionpolicy-status-controller" requiredFeatureGates=["ValidatingAdmissionPolicy"]
	I0328 01:33:32.483721    6044 command_runner.go:130] ! I0328 01:07:30.472153       1 ttlafterfinished_controller.go:109] "Starting TTL after finished controller"
	I0328 01:33:32.483721    6044 command_runner.go:130] ! I0328 01:07:30.472461       1 shared_informer.go:311] Waiting for caches to sync for TTL after finished
	I0328 01:33:32.483721    6044 command_runner.go:130] ! I0328 01:07:30.621249       1 controllermanager.go:735] "Started controller" controller="pod-garbage-collector-controller"
	I0328 01:33:32.483721    6044 command_runner.go:130] ! I0328 01:07:30.621373       1 gc_controller.go:101] "Starting GC controller"
	I0328 01:33:32.483721    6044 command_runner.go:130] ! I0328 01:07:30.621385       1 shared_informer.go:311] Waiting for caches to sync for GC
	I0328 01:33:32.483721    6044 command_runner.go:130] ! I0328 01:07:30.935875       1 controllermanager.go:735] "Started controller" controller="horizontal-pod-autoscaler-controller"
	I0328 01:33:32.483721    6044 command_runner.go:130] ! I0328 01:07:30.935911       1 controllermanager.go:687] "Controller is disabled by a feature gate" controller="service-cidr-controller" requiredFeatureGates=["MultiCIDRServiceAllocator"]
	I0328 01:33:32.483721    6044 command_runner.go:130] ! I0328 01:07:30.935949       1 horizontal.go:200] "Starting HPA controller"
	I0328 01:33:32.483721    6044 command_runner.go:130] ! I0328 01:07:30.935957       1 shared_informer.go:311] Waiting for caches to sync for HPA
	I0328 01:33:32.483721    6044 command_runner.go:130] ! I0328 01:07:31.068710       1 controllermanager.go:735] "Started controller" controller="bootstrap-signer-controller"
	I0328 01:33:32.483721    6044 command_runner.go:130] ! I0328 01:07:31.068846       1 shared_informer.go:311] Waiting for caches to sync for bootstrap_signer
	I0328 01:33:32.483721    6044 command_runner.go:130] ! I0328 01:07:31.220656       1 controllermanager.go:735] "Started controller" controller="root-ca-certificate-publisher-controller"
	I0328 01:33:32.483721    6044 command_runner.go:130] ! I0328 01:07:31.220877       1 publisher.go:102] "Starting root CA cert publisher controller"
	I0328 01:33:32.483721    6044 command_runner.go:130] ! I0328 01:07:31.220890       1 shared_informer.go:311] Waiting for caches to sync for crt configmap
	I0328 01:33:32.483721    6044 command_runner.go:130] ! I0328 01:07:31.379912       1 controllermanager.go:735] "Started controller" controller="endpointslice-mirroring-controller"
	I0328 01:33:32.483721    6044 command_runner.go:130] ! I0328 01:07:31.380187       1 endpointslicemirroring_controller.go:223] "Starting EndpointSliceMirroring controller"
	I0328 01:33:32.483721    6044 command_runner.go:130] ! I0328 01:07:31.380276       1 shared_informer.go:311] Waiting for caches to sync for endpoint_slice_mirroring
	I0328 01:33:32.483721    6044 command_runner.go:130] ! I0328 01:07:31.525433       1 controllermanager.go:735] "Started controller" controller="replicationcontroller-controller"
	I0328 01:33:32.483721    6044 command_runner.go:130] ! I0328 01:07:31.525577       1 replica_set.go:214] "Starting controller" name="replicationcontroller"
	I0328 01:33:32.483721    6044 command_runner.go:130] ! I0328 01:07:31.525588       1 shared_informer.go:311] Waiting for caches to sync for ReplicationController
	I0328 01:33:32.483721    6044 command_runner.go:130] ! I0328 01:07:31.690023       1 controllermanager.go:735] "Started controller" controller="ttl-controller"
	I0328 01:33:32.483721    6044 command_runner.go:130] ! I0328 01:07:31.690130       1 ttl_controller.go:124] "Starting TTL controller"
	I0328 01:33:32.483721    6044 command_runner.go:130] ! I0328 01:07:31.690144       1 shared_informer.go:311] Waiting for caches to sync for TTL
	I0328 01:33:32.483721    6044 command_runner.go:130] ! I0328 01:07:31.828859       1 controllermanager.go:735] "Started controller" controller="token-cleaner-controller"
	I0328 01:33:32.483721    6044 command_runner.go:130] ! I0328 01:07:31.828953       1 tokencleaner.go:112] "Starting token cleaner controller"
	I0328 01:33:32.483721    6044 command_runner.go:130] ! I0328 01:07:31.828963       1 shared_informer.go:311] Waiting for caches to sync for token_cleaner
	I0328 01:33:32.483721    6044 command_runner.go:130] ! I0328 01:07:31.828970       1 shared_informer.go:318] Caches are synced for token_cleaner
	I0328 01:33:32.483721    6044 command_runner.go:130] ! I0328 01:07:31.991678       1 controllermanager.go:735] "Started controller" controller="persistentvolume-attach-detach-controller"
	I0328 01:33:32.483721    6044 command_runner.go:130] ! I0328 01:07:31.994944       1 controllermanager.go:687] "Controller is disabled by a feature gate" controller="storageversion-garbage-collector-controller" requiredFeatureGates=["APIServerIdentity","StorageVersionAPI"]
	I0328 01:33:32.483721    6044 command_runner.go:130] ! I0328 01:07:31.994881       1 attach_detach_controller.go:337] "Starting attach detach controller"
	I0328 01:33:32.483721    6044 command_runner.go:130] ! I0328 01:07:31.995033       1 shared_informer.go:311] Waiting for caches to sync for attach detach
	I0328 01:33:32.483721    6044 command_runner.go:130] ! I0328 01:07:32.040043       1 controllermanager.go:735] "Started controller" controller="taint-eviction-controller"
	I0328 01:33:32.483721    6044 command_runner.go:130] ! I0328 01:07:32.041773       1 taint_eviction.go:285] "Starting" controller="taint-eviction-controller"
	I0328 01:33:32.483721    6044 command_runner.go:130] ! I0328 01:07:32.041876       1 taint_eviction.go:291] "Sending events to api server"
	I0328 01:33:32.483721    6044 command_runner.go:130] ! I0328 01:07:32.041901       1 shared_informer.go:311] Waiting for caches to sync for taint-eviction-controller
	I0328 01:33:32.483721    6044 command_runner.go:130] ! I0328 01:07:32.281623       1 controllermanager.go:735] "Started controller" controller="namespace-controller"
	I0328 01:33:32.483721    6044 command_runner.go:130] ! I0328 01:07:32.281708       1 namespace_controller.go:197] "Starting namespace controller"
	I0328 01:33:32.483721    6044 command_runner.go:130] ! I0328 01:07:32.281718       1 shared_informer.go:311] Waiting for caches to sync for namespace
	I0328 01:33:32.483721    6044 command_runner.go:130] ! I0328 01:07:32.316698       1 certificate_controller.go:115] "Starting certificate controller" name="csrsigning-kubelet-serving"
	I0328 01:33:32.483721    6044 command_runner.go:130] ! I0328 01:07:32.316737       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-kubelet-serving
	I0328 01:33:32.483721    6044 command_runner.go:130] ! I0328 01:07:32.316772       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0328 01:33:32.483721    6044 command_runner.go:130] ! I0328 01:07:32.322120       1 certificate_controller.go:115] "Starting certificate controller" name="csrsigning-kubelet-client"
	I0328 01:33:32.483721    6044 command_runner.go:130] ! I0328 01:07:32.322156       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-kubelet-client
	I0328 01:33:32.483721    6044 command_runner.go:130] ! I0328 01:07:32.322181       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0328 01:33:32.483721    6044 command_runner.go:130] ! I0328 01:07:32.327656       1 certificate_controller.go:115] "Starting certificate controller" name="csrsigning-kube-apiserver-client"
	I0328 01:33:32.483721    6044 command_runner.go:130] ! I0328 01:07:32.327690       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client
	I0328 01:33:32.483721    6044 command_runner.go:130] ! I0328 01:07:32.327721       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0328 01:33:32.483721    6044 command_runner.go:130] ! I0328 01:07:32.331471       1 controllermanager.go:735] "Started controller" controller="certificatesigningrequest-signing-controller"
	I0328 01:33:32.483721    6044 command_runner.go:130] ! I0328 01:07:32.331563       1 certificate_controller.go:115] "Starting certificate controller" name="csrsigning-legacy-unknown"
	I0328 01:33:32.483721    6044 command_runner.go:130] ! I0328 01:07:32.331574       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
	I0328 01:33:32.483721    6044 command_runner.go:130] ! I0328 01:07:32.331616       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0328 01:33:32.483721    6044 command_runner.go:130] ! E0328 01:07:32.365862       1 core.go:270] "Failed to start cloud node lifecycle controller" err="no cloud provider provided"
	I0328 01:33:32.483721    6044 command_runner.go:130] ! I0328 01:07:32.365985       1 controllermanager.go:713] "Warning: skipping controller" controller="cloud-node-lifecycle-controller"
	I0328 01:33:32.483721    6044 command_runner.go:130] ! I0328 01:07:32.366024       1 controllermanager.go:687] "Controller is disabled by a feature gate" controller="resourceclaim-controller" requiredFeatureGates=["DynamicResourceAllocation"]
	I0328 01:33:32.483721    6044 command_runner.go:130] ! I0328 01:07:32.520320       1 controllermanager.go:735] "Started controller" controller="endpointslice-controller"
	I0328 01:33:32.483721    6044 command_runner.go:130] ! I0328 01:07:32.520407       1 endpointslice_controller.go:264] "Starting endpoint slice controller"
	I0328 01:33:32.483721    6044 command_runner.go:130] ! I0328 01:07:32.520419       1 shared_informer.go:311] Waiting for caches to sync for endpoint_slice
	I0328 01:33:32.483721    6044 command_runner.go:130] ! I0328 01:07:32.567130       1 controllermanager.go:735] "Started controller" controller="certificatesigningrequest-cleaner-controller"
	I0328 01:33:32.483721    6044 command_runner.go:130] ! I0328 01:07:32.567208       1 cleaner.go:83] "Starting CSR cleaner controller"
	I0328 01:33:32.483721    6044 command_runner.go:130] ! I0328 01:07:32.719261       1 controllermanager.go:735] "Started controller" controller="statefulset-controller"
	I0328 01:33:32.483721    6044 command_runner.go:130] ! I0328 01:07:32.719392       1 stateful_set.go:161] "Starting stateful set controller"
	I0328 01:33:32.483721    6044 command_runner.go:130] ! I0328 01:07:32.719403       1 shared_informer.go:311] Waiting for caches to sync for stateful set
	I0328 01:33:32.483721    6044 command_runner.go:130] ! I0328 01:07:32.872730       1 controllermanager.go:735] "Started controller" controller="persistentvolumeclaim-protection-controller"
	I0328 01:33:32.483721    6044 command_runner.go:130] ! I0328 01:07:32.872869       1 pvc_protection_controller.go:102] "Starting PVC protection controller"
	I0328 01:33:32.483721    6044 command_runner.go:130] ! I0328 01:07:32.873455       1 shared_informer.go:311] Waiting for caches to sync for PVC protection
	I0328 01:33:32.483721    6044 command_runner.go:130] ! I0328 01:07:33.116208       1 garbagecollector.go:155] "Starting controller" controller="garbagecollector"
	I0328 01:33:32.484700    6044 command_runner.go:130] ! I0328 01:07:33.116233       1 shared_informer.go:311] Waiting for caches to sync for garbage collector
	I0328 01:33:32.484700    6044 command_runner.go:130] ! I0328 01:07:33.116257       1 graph_builder.go:302] "Running" component="GraphBuilder"
	I0328 01:33:32.484700    6044 command_runner.go:130] ! I0328 01:07:33.116280       1 controllermanager.go:735] "Started controller" controller="garbage-collector-controller"
	I0328 01:33:32.484700    6044 command_runner.go:130] ! I0328 01:07:33.370650       1 controllermanager.go:735] "Started controller" controller="deployment-controller"
	I0328 01:33:32.484700    6044 command_runner.go:130] ! I0328 01:07:33.370836       1 deployment_controller.go:168] "Starting controller" controller="deployment"
	I0328 01:33:32.484700    6044 command_runner.go:130] ! I0328 01:07:33.370851       1 shared_informer.go:311] Waiting for caches to sync for deployment
	I0328 01:33:32.484700    6044 command_runner.go:130] ! E0328 01:07:33.529036       1 core.go:105] "Failed to start service controller" err="WARNING: no cloud provider provided, services of type LoadBalancer will fail"
	I0328 01:33:32.484700    6044 command_runner.go:130] ! I0328 01:07:33.529209       1 controllermanager.go:713] "Warning: skipping controller" controller="service-lb-controller"
	I0328 01:33:32.484700    6044 command_runner.go:130] ! I0328 01:07:33.674381       1 controllermanager.go:735] "Started controller" controller="replicaset-controller"
	I0328 01:33:32.484700    6044 command_runner.go:130] ! I0328 01:07:33.674638       1 replica_set.go:214] "Starting controller" name="replicaset"
	I0328 01:33:32.484700    6044 command_runner.go:130] ! I0328 01:07:33.674700       1 shared_informer.go:311] Waiting for caches to sync for ReplicaSet
	I0328 01:33:32.484700    6044 command_runner.go:130] ! I0328 01:07:43.727895       1 range_allocator.go:111] "No Secondary Service CIDR provided. Skipping filtering out secondary service addresses"
	I0328 01:33:32.484700    6044 command_runner.go:130] ! I0328 01:07:43.728282       1 controllermanager.go:735] "Started controller" controller="node-ipam-controller"
	I0328 01:33:32.484700    6044 command_runner.go:130] ! I0328 01:07:43.728736       1 node_ipam_controller.go:160] "Starting ipam controller"
	I0328 01:33:32.484700    6044 command_runner.go:130] ! I0328 01:07:43.728751       1 shared_informer.go:311] Waiting for caches to sync for node
	I0328 01:33:32.484700    6044 command_runner.go:130] ! I0328 01:07:43.743975       1 controllermanager.go:735] "Started controller" controller="persistentvolume-expander-controller"
	I0328 01:33:32.484700    6044 command_runner.go:130] ! I0328 01:07:43.744248       1 expand_controller.go:328] "Starting expand controller"
	I0328 01:33:32.484700    6044 command_runner.go:130] ! I0328 01:07:43.744261       1 shared_informer.go:311] Waiting for caches to sync for expand
	I0328 01:33:32.484700    6044 command_runner.go:130] ! I0328 01:07:43.764054       1 controllermanager.go:735] "Started controller" controller="ephemeral-volume-controller"
	I0328 01:33:32.484700    6044 command_runner.go:130] ! I0328 01:07:43.765369       1 controller.go:169] "Starting ephemeral volume controller"
	I0328 01:33:32.484700    6044 command_runner.go:130] ! I0328 01:07:43.765400       1 shared_informer.go:311] Waiting for caches to sync for ephemeral
	I0328 01:33:32.484700    6044 command_runner.go:130] ! I0328 01:07:43.801140       1 controllermanager.go:735] "Started controller" controller="endpoints-controller"
	I0328 01:33:32.484700    6044 command_runner.go:130] ! I0328 01:07:43.801602       1 endpoints_controller.go:174] "Starting endpoint controller"
	I0328 01:33:32.484700    6044 command_runner.go:130] ! I0328 01:07:43.801743       1 shared_informer.go:311] Waiting for caches to sync for endpoint
	I0328 01:33:32.484700    6044 command_runner.go:130] ! I0328 01:07:43.818031       1 controllermanager.go:735] "Started controller" controller="daemonset-controller"
	I0328 01:33:32.484700    6044 command_runner.go:130] ! I0328 01:07:43.818707       1 daemon_controller.go:291] "Starting daemon sets controller"
	I0328 01:33:32.484700    6044 command_runner.go:130] ! I0328 01:07:43.820733       1 shared_informer.go:311] Waiting for caches to sync for daemon sets
	I0328 01:33:32.484700    6044 command_runner.go:130] ! I0328 01:07:43.839571       1 shared_informer.go:311] Waiting for caches to sync for resource quota
	I0328 01:33:32.484700    6044 command_runner.go:130] ! I0328 01:07:43.887668       1 shared_informer.go:318] Caches are synced for TTL after finished
	I0328 01:33:32.484700    6044 command_runner.go:130] ! I0328 01:07:43.905965       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-240000\" does not exist"
	I0328 01:33:32.484700    6044 command_runner.go:130] ! I0328 01:07:43.917970       1 shared_informer.go:318] Caches are synced for cronjob
	I0328 01:33:32.484700    6044 command_runner.go:130] ! I0328 01:07:43.918581       1 shared_informer.go:318] Caches are synced for legacy-service-account-token-cleaner
	I0328 01:33:32.484700    6044 command_runner.go:130] ! I0328 01:07:43.921260       1 shared_informer.go:318] Caches are synced for endpoint_slice
	I0328 01:33:32.484700    6044 command_runner.go:130] ! I0328 01:07:43.921573       1 shared_informer.go:318] Caches are synced for GC
	I0328 01:33:32.484700    6044 command_runner.go:130] ! I0328 01:07:43.921763       1 shared_informer.go:318] Caches are synced for stateful set
	I0328 01:33:32.484700    6044 command_runner.go:130] ! I0328 01:07:43.923599       1 shared_informer.go:318] Caches are synced for ClusterRoleAggregator
	I0328 01:33:32.484700    6044 command_runner.go:130] ! I0328 01:07:43.924267       1 shared_informer.go:311] Waiting for caches to sync for garbage collector
	I0328 01:33:32.484700    6044 command_runner.go:130] ! I0328 01:07:43.922298       1 shared_informer.go:318] Caches are synced for daemon sets
	I0328 01:33:32.484700    6044 command_runner.go:130] ! I0328 01:07:43.928013       1 shared_informer.go:318] Caches are synced for ReplicationController
	I0328 01:33:32.484700    6044 command_runner.go:130] ! I0328 01:07:43.928774       1 shared_informer.go:318] Caches are synced for node
	I0328 01:33:32.484700    6044 command_runner.go:130] ! I0328 01:07:43.932324       1 range_allocator.go:174] "Sending events to api server"
	I0328 01:33:32.484700    6044 command_runner.go:130] ! I0328 01:07:43.932665       1 range_allocator.go:178] "Starting range CIDR allocator"
	I0328 01:33:32.484700    6044 command_runner.go:130] ! I0328 01:07:43.932965       1 shared_informer.go:311] Waiting for caches to sync for cidrallocator
	I0328 01:33:32.484700    6044 command_runner.go:130] ! I0328 01:07:43.933302       1 shared_informer.go:318] Caches are synced for cidrallocator
	I0328 01:33:32.484700    6044 command_runner.go:130] ! I0328 01:07:43.922308       1 shared_informer.go:318] Caches are synced for crt configmap
	I0328 01:33:32.484700    6044 command_runner.go:130] ! I0328 01:07:43.936175       1 shared_informer.go:318] Caches are synced for HPA
	I0328 01:33:32.484700    6044 command_runner.go:130] ! I0328 01:07:43.933370       1 shared_informer.go:318] Caches are synced for taint
	I0328 01:33:32.484700    6044 command_runner.go:130] ! I0328 01:07:43.936479       1 node_lifecycle_controller.go:1222] "Initializing eviction metric for zone" zone=""
	I0328 01:33:32.484700    6044 command_runner.go:130] ! I0328 01:07:43.936564       1 node_lifecycle_controller.go:874] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-240000"
	I0328 01:33:32.484700    6044 command_runner.go:130] ! I0328 01:07:43.936602       1 node_lifecycle_controller.go:1026] "Controller detected that all Nodes are not-Ready. Entering master disruption mode"
	I0328 01:33:32.484700    6044 command_runner.go:130] ! I0328 01:07:43.937774       1 event.go:376] "Event occurred" object="multinode-240000" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-240000 event: Registered Node multinode-240000 in Controller"
	I0328 01:33:32.484700    6044 command_runner.go:130] ! I0328 01:07:43.945317       1 shared_informer.go:318] Caches are synced for taint-eviction-controller
	I0328 01:33:32.484700    6044 command_runner.go:130] ! I0328 01:07:43.945634       1 shared_informer.go:318] Caches are synced for expand
	I0328 01:33:32.484700    6044 command_runner.go:130] ! I0328 01:07:43.953475       1 shared_informer.go:318] Caches are synced for PV protection
	I0328 01:33:32.484700    6044 command_runner.go:130] ! I0328 01:07:43.955430       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-240000" podCIDRs=["10.244.0.0/24"]
	I0328 01:33:32.485718    6044 command_runner.go:130] ! I0328 01:07:43.967780       1 shared_informer.go:318] Caches are synced for ephemeral
	I0328 01:33:32.485718    6044 command_runner.go:130] ! I0328 01:07:43.970146       1 shared_informer.go:318] Caches are synced for bootstrap_signer
	I0328 01:33:32.485718    6044 command_runner.go:130] ! I0328 01:07:43.973346       1 shared_informer.go:318] Caches are synced for persistent volume
	I0328 01:33:32.485718    6044 command_runner.go:130] ! I0328 01:07:43.973608       1 shared_informer.go:318] Caches are synced for PVC protection
	I0328 01:33:32.485718    6044 command_runner.go:130] ! I0328 01:07:43.981178       1 shared_informer.go:318] Caches are synced for endpoint_slice_mirroring
	I0328 01:33:32.485718    6044 command_runner.go:130] ! I0328 01:07:43.981918       1 event.go:376] "Event occurred" object="kube-system/kube-scheduler-multinode-240000" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0328 01:33:32.485718    6044 command_runner.go:130] ! I0328 01:07:43.981953       1 event.go:376] "Event occurred" object="kube-system/kube-apiserver-multinode-240000" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0328 01:33:32.485718    6044 command_runner.go:130] ! I0328 01:07:43.981962       1 event.go:376] "Event occurred" object="kube-system/etcd-multinode-240000" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0328 01:33:32.485718    6044 command_runner.go:130] ! I0328 01:07:43.982017       1 shared_informer.go:318] Caches are synced for namespace
	I0328 01:33:32.485718    6044 command_runner.go:130] ! I0328 01:07:43.982124       1 shared_informer.go:318] Caches are synced for service account
	I0328 01:33:32.485718    6044 command_runner.go:130] ! I0328 01:07:43.983577       1 event.go:376] "Event occurred" object="kube-system/kube-controller-manager-multinode-240000" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0328 01:33:32.485718    6044 command_runner.go:130] ! I0328 01:07:43.992236       1 shared_informer.go:318] Caches are synced for job
	I0328 01:33:32.485718    6044 command_runner.go:130] ! I0328 01:07:43.992438       1 shared_informer.go:318] Caches are synced for TTL
	I0328 01:33:32.485718    6044 command_runner.go:130] ! I0328 01:07:43.995152       1 shared_informer.go:318] Caches are synced for attach detach
	I0328 01:33:32.485718    6044 command_runner.go:130] ! I0328 01:07:44.003250       1 shared_informer.go:318] Caches are synced for endpoint
	I0328 01:33:32.485718    6044 command_runner.go:130] ! I0328 01:07:44.023343       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-client
	I0328 01:33:32.485718    6044 command_runner.go:130] ! I0328 01:07:44.023546       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-serving
	I0328 01:33:32.485718    6044 command_runner.go:130] ! I0328 01:07:44.030529       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0328 01:33:32.485718    6044 command_runner.go:130] ! I0328 01:07:44.032370       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-legacy-unknown
	I0328 01:33:32.485718    6044 command_runner.go:130] ! I0328 01:07:44.039826       1 shared_informer.go:318] Caches are synced for resource quota
	I0328 01:33:32.485718    6044 command_runner.go:130] ! I0328 01:07:44.039875       1 shared_informer.go:318] Caches are synced for disruption
	I0328 01:33:32.485718    6044 command_runner.go:130] ! I0328 01:07:44.059155       1 shared_informer.go:318] Caches are synced for certificate-csrapproving
	I0328 01:33:32.485718    6044 command_runner.go:130] ! I0328 01:07:44.071020       1 shared_informer.go:318] Caches are synced for deployment
	I0328 01:33:32.485718    6044 command_runner.go:130] ! I0328 01:07:44.074821       1 shared_informer.go:318] Caches are synced for ReplicaSet
	I0328 01:33:32.485718    6044 command_runner.go:130] ! I0328 01:07:44.095916       1 shared_informer.go:318] Caches are synced for resource quota
	I0328 01:33:32.485718    6044 command_runner.go:130] ! I0328 01:07:44.097596       1 event.go:376] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-rwghf"
	I0328 01:33:32.485718    6044 command_runner.go:130] ! I0328 01:07:44.101053       1 event.go:376] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-47rqg"
	I0328 01:33:32.485718    6044 command_runner.go:130] ! I0328 01:07:44.321636       1 event.go:376] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-76f75df574 to 2"
	I0328 01:33:32.485718    6044 command_runner.go:130] ! I0328 01:07:44.505533       1 event.go:376] "Event occurred" object="kube-system/coredns-76f75df574" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-76f75df574-fgw8j"
	I0328 01:33:32.485718    6044 command_runner.go:130] ! I0328 01:07:44.516581       1 shared_informer.go:318] Caches are synced for garbage collector
	I0328 01:33:32.485718    6044 command_runner.go:130] ! I0328 01:07:44.516605       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I0328 01:33:32.485718    6044 command_runner.go:130] ! I0328 01:07:44.526884       1 shared_informer.go:318] Caches are synced for garbage collector
	I0328 01:33:32.485718    6044 command_runner.go:130] ! I0328 01:07:44.626020       1 event.go:376] "Event occurred" object="kube-system/coredns-76f75df574" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-76f75df574-776ph"
	I0328 01:33:32.485718    6044 command_runner.go:130] ! I0328 01:07:44.696026       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="375.988233ms"
	I0328 01:33:32.485718    6044 command_runner.go:130] ! I0328 01:07:44.735389       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="39.221627ms"
	I0328 01:33:32.485718    6044 command_runner.go:130] ! I0328 01:07:44.735856       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="390.399µs"
	I0328 01:33:32.485718    6044 command_runner.go:130] ! I0328 01:07:45.456688       1 event.go:376] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-76f75df574 to 1 from 2"
	I0328 01:33:32.485718    6044 command_runner.go:130] ! I0328 01:07:45.536906       1 event.go:376] "Event occurred" object="kube-system/coredns-76f75df574" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-76f75df574-fgw8j"
	I0328 01:33:32.485718    6044 command_runner.go:130] ! I0328 01:07:45.583335       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="126.427189ms"
	I0328 01:33:32.485718    6044 command_runner.go:130] ! I0328 01:07:45.637187       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="53.741283ms"
	I0328 01:33:32.485718    6044 command_runner.go:130] ! I0328 01:07:45.710380       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="73.035205ms"
	I0328 01:33:32.485718    6044 command_runner.go:130] ! I0328 01:07:45.710568       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="73.7µs"
	I0328 01:33:32.485718    6044 command_runner.go:130] ! I0328 01:07:57.839298       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="81.8µs"
	I0328 01:33:32.485718    6044 command_runner.go:130] ! I0328 01:07:57.891332       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="135.3µs"
	I0328 01:33:32.485718    6044 command_runner.go:130] ! I0328 01:07:58.938669       1 node_lifecycle_controller.go:1045] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I0328 01:33:32.485718    6044 command_runner.go:130] ! I0328 01:07:59.949779       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="25.944009ms"
	I0328 01:33:32.485718    6044 command_runner.go:130] ! I0328 01:07:59.950218       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="327.807µs"
	I0328 01:33:32.485718    6044 command_runner.go:130] ! I0328 01:10:54.764176       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-240000-m02\" does not exist"
	I0328 01:33:32.485718    6044 command_runner.go:130] ! I0328 01:10:54.803820       1 event.go:376] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-hsnfl"
	I0328 01:33:32.485718    6044 command_runner.go:130] ! I0328 01:10:54.803944       1 event.go:376] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-t88gz"
	I0328 01:33:32.485718    6044 command_runner.go:130] ! I0328 01:10:54.804885       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-240000-m02" podCIDRs=["10.244.1.0/24"]
	I0328 01:33:32.485718    6044 command_runner.go:130] ! I0328 01:10:58.975442       1 node_lifecycle_controller.go:874] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-240000-m02"
	I0328 01:33:32.485718    6044 command_runner.go:130] ! I0328 01:10:58.975715       1 event.go:376] "Event occurred" object="multinode-240000-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-240000-m02 event: Registered Node multinode-240000-m02 in Controller"
	I0328 01:33:32.485718    6044 command_runner.go:130] ! I0328 01:11:17.665064       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-240000-m02"
	I0328 01:33:32.485718    6044 command_runner.go:130] ! I0328 01:11:46.242165       1 event.go:376] "Event occurred" object="default/busybox" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set busybox-7fdf7869d9 to 2"
	I0328 01:33:32.485718    6044 command_runner.go:130] ! I0328 01:11:46.265582       1 event.go:376] "Event occurred" object="default/busybox-7fdf7869d9" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-7fdf7869d9-zgwm4"
	I0328 01:33:32.485718    6044 command_runner.go:130] ! I0328 01:11:46.287052       1 event.go:376] "Event occurred" object="default/busybox-7fdf7869d9" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-7fdf7869d9-ct428"
	I0328 01:33:32.485718    6044 command_runner.go:130] ! I0328 01:11:46.306059       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="64.440988ms"
	I0328 01:33:32.486719    6044 command_runner.go:130] ! I0328 01:11:46.352353       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="46.180707ms"
	I0328 01:33:32.486719    6044 command_runner.go:130] ! I0328 01:11:46.354927       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="106.701µs"
	I0328 01:33:32.486719    6044 command_runner.go:130] ! I0328 01:11:46.380446       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="75.4µs"
	I0328 01:33:32.486719    6044 command_runner.go:130] ! I0328 01:11:49.177937       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="20.338671ms"
	I0328 01:33:32.486719    6044 command_runner.go:130] ! I0328 01:11:49.178143       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="95.8µs"
	I0328 01:33:32.486719    6044 command_runner.go:130] ! I0328 01:11:49.352601       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="13.382248ms"
	I0328 01:33:32.486719    6044 command_runner.go:130] ! I0328 01:11:49.353052       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="62.5µs"
	I0328 01:33:32.486719    6044 command_runner.go:130] ! I0328 01:15:58.358805       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-240000-m03\" does not exist"
	I0328 01:33:32.491836    6044 command_runner.go:130] ! I0328 01:15:58.359348       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-240000-m02"
	I0328 01:33:32.491915    6044 command_runner.go:130] ! I0328 01:15:58.402286       1 event.go:376] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-jvgx2"
	I0328 01:33:32.491915    6044 command_runner.go:130] ! I0328 01:15:58.402827       1 event.go:376] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-55rch"
	I0328 01:33:32.492041    6044 command_runner.go:130] ! I0328 01:15:58.405421       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-240000-m03" podCIDRs=["10.244.2.0/24"]
	I0328 01:33:32.492041    6044 command_runner.go:130] ! I0328 01:15:59.058703       1 node_lifecycle_controller.go:874] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-240000-m03"
	I0328 01:33:32.492131    6044 command_runner.go:130] ! I0328 01:15:59.059180       1 event.go:376] "Event occurred" object="multinode-240000-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-240000-m03 event: Registered Node multinode-240000-m03 in Controller"
	I0328 01:33:32.492131    6044 command_runner.go:130] ! I0328 01:16:20.751668       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-240000-m02"
	I0328 01:33:32.492131    6044 command_runner.go:130] ! I0328 01:24:29.197407       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-240000-m02"
	I0328 01:33:32.492131    6044 command_runner.go:130] ! I0328 01:24:29.203202       1 event.go:376] "Event occurred" object="multinode-240000-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node multinode-240000-m03 status is now: NodeNotReady"
	I0328 01:33:32.492199    6044 command_runner.go:130] ! I0328 01:24:29.229608       1 event.go:376] "Event occurred" object="kube-system/kube-proxy-55rch" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0328 01:33:32.492199    6044 command_runner.go:130] ! I0328 01:24:29.247522       1 event.go:376] "Event occurred" object="kube-system/kindnet-jvgx2" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0328 01:33:32.492199    6044 command_runner.go:130] ! I0328 01:27:23.686830       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-240000-m02"
	I0328 01:33:32.492326    6044 command_runner.go:130] ! I0328 01:27:24.286010       1 event.go:376] "Event occurred" object="multinode-240000-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RemovingNode" message="Node multinode-240000-m03 event: Removing Node multinode-240000-m03 from Controller"
	I0328 01:33:32.492326    6044 command_runner.go:130] ! I0328 01:27:30.358404       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-240000-m02"
	I0328 01:33:32.492326    6044 command_runner.go:130] ! I0328 01:27:30.361770       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-240000-m03\" does not exist"
	I0328 01:33:32.492413    6044 command_runner.go:130] ! I0328 01:27:30.394360       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-240000-m03" podCIDRs=["10.244.3.0/24"]
	I0328 01:33:32.492413    6044 command_runner.go:130] ! I0328 01:27:34.288477       1 event.go:376] "Event occurred" object="multinode-240000-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-240000-m03 event: Registered Node multinode-240000-m03 in Controller"
	I0328 01:33:32.492470    6044 command_runner.go:130] ! I0328 01:27:36.134336       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-240000-m03"
	I0328 01:33:32.492492    6044 command_runner.go:130] ! I0328 01:29:14.344304       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-240000-m02"
	I0328 01:33:32.492520    6044 command_runner.go:130] ! I0328 01:29:14.346290       1 event.go:376] "Event occurred" object="multinode-240000-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node multinode-240000-m03 status is now: NodeNotReady"
	I0328 01:33:32.492520    6044 command_runner.go:130] ! I0328 01:29:14.370766       1 event.go:376] "Event occurred" object="kube-system/kube-proxy-55rch" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0328 01:33:32.492520    6044 command_runner.go:130] ! I0328 01:29:14.392308       1 event.go:376] "Event occurred" object="kube-system/kindnet-jvgx2" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0328 01:33:32.515107    6044 logs.go:123] Gathering logs for dmesg ...
	I0328 01:33:32.515107    6044 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:33:32.542162    6044 command_runner.go:130] > [Mar28 01:30] You have booted with nomodeset. This means your GPU drivers are DISABLED
	I0328 01:33:32.542162    6044 command_runner.go:130] > [  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	I0328 01:33:32.542249    6044 command_runner.go:130] > [  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	I0328 01:33:32.542249    6044 command_runner.go:130] > [  +0.141916] RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
	I0328 01:33:32.542249    6044 command_runner.go:130] > [  +0.024106] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	I0328 01:33:32.542317    6044 command_runner.go:130] > [  +0.000005] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	I0328 01:33:32.542317    6044 command_runner.go:130] > [  +0.000001] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	I0328 01:33:32.542317    6044 command_runner.go:130] > [  +0.068008] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	I0328 01:33:32.542379    6044 command_runner.go:130] > [  +0.027431] * Found PM-Timer Bug on the chipset. Due to workarounds for a bug,
	I0328 01:33:32.542379    6044 command_runner.go:130] >               * this clock source is slow. Consider trying other clock sources
	I0328 01:33:32.542411    6044 command_runner.go:130] > [  +5.946328] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	I0328 01:33:32.542411    6044 command_runner.go:130] > [  +0.758535] psmouse serio1: trackpoint: failed to get extended button data, assuming 3 buttons
	I0328 01:33:32.542411    6044 command_runner.go:130] > [  +1.937420] systemd-fstab-generator[115]: Ignoring "noauto" option for root device
	I0328 01:33:32.542458    6044 command_runner.go:130] > [  +7.347197] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	I0328 01:33:32.542458    6044 command_runner.go:130] > [  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	I0328 01:33:32.542498    6044 command_runner.go:130] > [  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	I0328 01:33:32.542498    6044 command_runner.go:130] > [Mar28 01:31] systemd-fstab-generator[640]: Ignoring "noauto" option for root device
	I0328 01:33:32.542498    6044 command_runner.go:130] > [  +0.201840] systemd-fstab-generator[653]: Ignoring "noauto" option for root device
	I0328 01:33:32.542538    6044 command_runner.go:130] > [Mar28 01:32] systemd-fstab-generator[979]: Ignoring "noauto" option for root device
	I0328 01:33:32.542538    6044 command_runner.go:130] > [  +0.108343] kauditd_printk_skb: 73 callbacks suppressed
	I0328 01:33:32.542605    6044 command_runner.go:130] > [  +0.586323] systemd-fstab-generator[1017]: Ignoring "noauto" option for root device
	I0328 01:33:32.542605    6044 command_runner.go:130] > [  +0.218407] systemd-fstab-generator[1029]: Ignoring "noauto" option for root device
	I0328 01:33:32.542605    6044 command_runner.go:130] > [  +0.238441] systemd-fstab-generator[1043]: Ignoring "noauto" option for root device
	I0328 01:33:32.542605    6044 command_runner.go:130] > [  +3.002162] systemd-fstab-generator[1230]: Ignoring "noauto" option for root device
	I0328 01:33:32.542668    6044 command_runner.go:130] > [  +0.206082] systemd-fstab-generator[1242]: Ignoring "noauto" option for root device
	I0328 01:33:32.542668    6044 command_runner.go:130] > [  +0.206423] systemd-fstab-generator[1254]: Ignoring "noauto" option for root device
	I0328 01:33:32.542668    6044 command_runner.go:130] > [  +0.316656] systemd-fstab-generator[1269]: Ignoring "noauto" option for root device
	I0328 01:33:32.542668    6044 command_runner.go:130] > [  +0.941398] systemd-fstab-generator[1391]: Ignoring "noauto" option for root device
	I0328 01:33:32.542668    6044 command_runner.go:130] > [  +0.123620] kauditd_printk_skb: 205 callbacks suppressed
	I0328 01:33:32.542775    6044 command_runner.go:130] > [  +3.687763] systemd-fstab-generator[1526]: Ignoring "noauto" option for root device
	I0328 01:33:32.542775    6044 command_runner.go:130] > [  +1.367953] kauditd_printk_skb: 44 callbacks suppressed
	I0328 01:33:32.542775    6044 command_runner.go:130] > [  +6.014600] kauditd_printk_skb: 30 callbacks suppressed
	I0328 01:33:32.542832    6044 command_runner.go:130] > [  +4.465273] systemd-fstab-generator[3066]: Ignoring "noauto" option for root device
	I0328 01:33:32.542832    6044 command_runner.go:130] > [  +7.649293] kauditd_printk_skb: 70 callbacks suppressed
	I0328 01:33:32.544407    6044 logs.go:123] Gathering logs for kube-proxy [7c9638784c60] ...
	I0328 01:33:32.544407    6044 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c9638784c60"
	I0328 01:33:32.576041    6044 command_runner.go:130] ! I0328 01:32:22.346613       1 server_others.go:72] "Using iptables proxy"
	I0328 01:33:32.576041    6044 command_runner.go:130] ! I0328 01:32:22.432600       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["172.28.229.19"]
	I0328 01:33:32.576041    6044 command_runner.go:130] ! I0328 01:32:22.670309       1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
	I0328 01:33:32.576041    6044 command_runner.go:130] ! I0328 01:32:22.670342       1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0328 01:33:32.576041    6044 command_runner.go:130] ! I0328 01:32:22.670422       1 server_others.go:168] "Using iptables Proxier"
	I0328 01:33:32.576041    6044 command_runner.go:130] ! I0328 01:32:22.691003       1 proxier.go:245] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0328 01:33:32.576041    6044 command_runner.go:130] ! I0328 01:32:22.691955       1 server.go:865] "Version info" version="v1.29.3"
	I0328 01:33:32.576041    6044 command_runner.go:130] ! I0328 01:32:22.691998       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0328 01:33:32.576041    6044 command_runner.go:130] ! I0328 01:32:22.703546       1 config.go:188] "Starting service config controller"
	I0328 01:33:32.576041    6044 command_runner.go:130] ! I0328 01:32:22.706995       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0328 01:33:32.576041    6044 command_runner.go:130] ! I0328 01:32:22.707357       1 config.go:97] "Starting endpoint slice config controller"
	I0328 01:33:32.577024    6044 command_runner.go:130] ! I0328 01:32:22.707370       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0328 01:33:32.577024    6044 command_runner.go:130] ! I0328 01:32:22.708174       1 config.go:315] "Starting node config controller"
	I0328 01:33:32.577024    6044 command_runner.go:130] ! I0328 01:32:22.708184       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0328 01:33:32.577024    6044 command_runner.go:130] ! I0328 01:32:22.807593       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0328 01:33:32.577024    6044 command_runner.go:130] ! I0328 01:32:22.807699       1 shared_informer.go:318] Caches are synced for service config
	I0328 01:33:32.577024    6044 command_runner.go:130] ! I0328 01:32:22.808439       1 shared_informer.go:318] Caches are synced for node config
	I0328 01:33:35.091248    6044 api_server.go:253] Checking apiserver healthz at https://172.28.229.19:8443/healthz ...
	I0328 01:33:35.099170    6044 api_server.go:279] https://172.28.229.19:8443/healthz returned 200:
	ok
	I0328 01:33:35.099859    6044 round_trippers.go:463] GET https://172.28.229.19:8443/version
	I0328 01:33:35.099859    6044 round_trippers.go:469] Request Headers:
	I0328 01:33:35.099859    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:33:35.099859    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:33:35.101522    6044 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0328 01:33:35.101522    6044 round_trippers.go:577] Response Headers:
	I0328 01:33:35.101522    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:33:35.101993    6044 round_trippers.go:580]     Content-Length: 263
	I0328 01:33:35.101993    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:33:35 GMT
	I0328 01:33:35.101993    6044 round_trippers.go:580]     Audit-Id: 1e18aebc-88d9-4bca-a454-127886c4f63d
	I0328 01:33:35.101993    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:33:35.102055    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:33:35.102055    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:33:35.102055    6044 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "29",
	  "gitVersion": "v1.29.3",
	  "gitCommit": "6813625b7cd706db5bc7388921be03071e1a492d",
	  "gitTreeState": "clean",
	  "buildDate": "2024-03-14T23:58:36Z",
	  "goVersion": "go1.21.8",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0328 01:33:35.102055    6044 api_server.go:141] control plane version: v1.29.3
	I0328 01:33:35.102055    6044 api_server.go:131] duration metric: took 3.9971042s to wait for apiserver health ...
	I0328 01:33:35.102055    6044 system_pods.go:43] waiting for kube-system pods to appear ...
	I0328 01:33:35.113585    6044 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0328 01:33:35.140665    6044 command_runner.go:130] > 6539c85e1b61
	I0328 01:33:35.141602    6044 logs.go:276] 1 containers: [6539c85e1b61]
	I0328 01:33:35.153084    6044 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0328 01:33:35.179354    6044 command_runner.go:130] > ab4a76ecb029
	I0328 01:33:35.180316    6044 logs.go:276] 1 containers: [ab4a76ecb029]
	I0328 01:33:35.194762    6044 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0328 01:33:35.231631    6044 command_runner.go:130] > e6a5a75ec447
	I0328 01:33:35.231975    6044 command_runner.go:130] > 29e516c918ef
	I0328 01:33:35.232319    6044 logs.go:276] 2 containers: [e6a5a75ec447 29e516c918ef]
	I0328 01:33:35.243219    6044 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0328 01:33:35.279599    6044 command_runner.go:130] > bc83a37dbd03
	I0328 01:33:35.279677    6044 command_runner.go:130] > 7061eab02790
	I0328 01:33:35.279743    6044 logs.go:276] 2 containers: [bc83a37dbd03 7061eab02790]
	I0328 01:33:35.289046    6044 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0328 01:33:35.317591    6044 command_runner.go:130] > 7c9638784c60
	I0328 01:33:35.321722    6044 command_runner.go:130] > bb0b3c542264
	I0328 01:33:35.321722    6044 logs.go:276] 2 containers: [7c9638784c60 bb0b3c542264]
	I0328 01:33:35.332818    6044 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0328 01:33:35.355279    6044 command_runner.go:130] > ceaccf323dee
	I0328 01:33:35.355279    6044 command_runner.go:130] > 1aa05268773e
	I0328 01:33:35.355279    6044 logs.go:276] 2 containers: [ceaccf323dee 1aa05268773e]
	I0328 01:33:35.365460    6044 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0328 01:33:35.389132    6044 command_runner.go:130] > ee99098e42fc
	I0328 01:33:35.389132    6044 command_runner.go:130] > dc9808261b21
	I0328 01:33:35.389132    6044 logs.go:276] 2 containers: [ee99098e42fc dc9808261b21]
	I0328 01:33:35.389611    6044 logs.go:123] Gathering logs for coredns [e6a5a75ec447] ...
	I0328 01:33:35.389611    6044 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e6a5a75ec447"
	I0328 01:33:35.419889    6044 command_runner.go:130] > .:53
	I0328 01:33:35.420772    6044 command_runner.go:130] > [INFO] plugin/reload: Running configuration SHA512 = 61f4d0960164fdf8d8157aaa96d041acf5b29f3c98ba802d705114162ff9f2cc889bbb973f9b8023f3112734912ee6f4eadc4faa21115183d5697de30dae3805
	I0328 01:33:35.420772    6044 command_runner.go:130] > CoreDNS-1.11.1
	I0328 01:33:35.420772    6044 command_runner.go:130] > linux/amd64, go1.20.7, ae2bbc2
	I0328 01:33:35.420772    6044 command_runner.go:130] > [INFO] 127.0.0.1:56542 - 57483 "HINFO IN 863318367541877849.2825438388179145044. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.037994825s
	I0328 01:33:35.421089    6044 logs.go:123] Gathering logs for kindnet [dc9808261b21] ...
	I0328 01:33:35.421140    6044 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dc9808261b21"
	I0328 01:33:35.448022    6044 command_runner.go:130] ! I0328 01:18:33.819057       1 main.go:227] handling current node
	I0328 01:33:35.448022    6044 command_runner.go:130] ! I0328 01:18:33.819073       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:35.448513    6044 command_runner.go:130] ! I0328 01:18:33.819080       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:35.448571    6044 command_runner.go:130] ! I0328 01:18:33.819256       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:35.448571    6044 command_runner.go:130] ! I0328 01:18:33.819279       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:35.448634    6044 command_runner.go:130] ! I0328 01:18:43.840507       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:35.448696    6044 command_runner.go:130] ! I0328 01:18:43.840617       1 main.go:227] handling current node
	I0328 01:33:35.448804    6044 command_runner.go:130] ! I0328 01:18:43.840633       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:35.448963    6044 command_runner.go:130] ! I0328 01:18:43.840643       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:35.448963    6044 command_runner.go:130] ! I0328 01:18:43.841217       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:35.449104    6044 command_runner.go:130] ! I0328 01:18:43.841333       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:35.449253    6044 command_runner.go:130] ! I0328 01:18:53.861521       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:35.449253    6044 command_runner.go:130] ! I0328 01:18:53.861738       1 main.go:227] handling current node
	I0328 01:33:35.449384    6044 command_runner.go:130] ! I0328 01:18:53.861763       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:35.449384    6044 command_runner.go:130] ! I0328 01:18:53.861779       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:35.449384    6044 command_runner.go:130] ! I0328 01:18:53.864849       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:35.449384    6044 command_runner.go:130] ! I0328 01:18:53.864869       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:35.449384    6044 command_runner.go:130] ! I0328 01:19:03.880199       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:35.449384    6044 command_runner.go:130] ! I0328 01:19:03.880733       1 main.go:227] handling current node
	I0328 01:33:35.449384    6044 command_runner.go:130] ! I0328 01:19:03.880872       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:35.449384    6044 command_runner.go:130] ! I0328 01:19:03.880900       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:35.449384    6044 command_runner.go:130] ! I0328 01:19:03.881505       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:35.449970    6044 command_runner.go:130] ! I0328 01:19:03.881543       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:35.450039    6044 command_runner.go:130] ! I0328 01:19:13.889436       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:35.450099    6044 command_runner.go:130] ! I0328 01:19:13.889552       1 main.go:227] handling current node
	I0328 01:33:35.450141    6044 command_runner.go:130] ! I0328 01:19:13.889571       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:35.453421    6044 command_runner.go:130] ! I0328 01:19:13.889581       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:35.453500    6044 command_runner.go:130] ! I0328 01:19:13.889757       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:35.453500    6044 command_runner.go:130] ! I0328 01:19:13.889789       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:35.453500    6044 command_runner.go:130] ! I0328 01:19:23.898023       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:35.453561    6044 command_runner.go:130] ! I0328 01:19:23.898229       1 main.go:227] handling current node
	I0328 01:33:35.453561    6044 command_runner.go:130] ! I0328 01:19:23.898245       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:35.453625    6044 command_runner.go:130] ! I0328 01:19:23.898277       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:35.453625    6044 command_runner.go:130] ! I0328 01:19:23.898405       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:35.453696    6044 command_runner.go:130] ! I0328 01:19:23.898492       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:35.453696    6044 command_runner.go:130] ! I0328 01:19:33.905977       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:35.453772    6044 command_runner.go:130] ! I0328 01:19:33.906123       1 main.go:227] handling current node
	I0328 01:33:35.453831    6044 command_runner.go:130] ! I0328 01:19:33.906157       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:35.453831    6044 command_runner.go:130] ! I0328 01:19:33.906167       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:35.453893    6044 command_runner.go:130] ! I0328 01:19:33.906618       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:35.453893    6044 command_runner.go:130] ! I0328 01:19:33.906762       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:35.453970    6044 command_runner.go:130] ! I0328 01:19:43.914797       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:35.453970    6044 command_runner.go:130] ! I0328 01:19:43.914849       1 main.go:227] handling current node
	I0328 01:33:35.454059    6044 command_runner.go:130] ! I0328 01:19:43.914863       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:35.454059    6044 command_runner.go:130] ! I0328 01:19:43.914872       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:35.454135    6044 command_runner.go:130] ! I0328 01:19:43.915508       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:35.454210    6044 command_runner.go:130] ! I0328 01:19:43.915608       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:35.454275    6044 command_runner.go:130] ! I0328 01:19:53.928273       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:35.454275    6044 command_runner.go:130] ! I0328 01:19:53.928372       1 main.go:227] handling current node
	I0328 01:33:35.454353    6044 command_runner.go:130] ! I0328 01:19:53.928389       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:35.454353    6044 command_runner.go:130] ! I0328 01:19:53.928398       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:35.454442    6044 command_runner.go:130] ! I0328 01:19:53.928659       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:35.454481    6044 command_runner.go:130] ! I0328 01:19:53.928813       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:35.454525    6044 command_runner.go:130] ! I0328 01:20:03.943868       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:35.454525    6044 command_runner.go:130] ! I0328 01:20:03.943974       1 main.go:227] handling current node
	I0328 01:33:35.454606    6044 command_runner.go:130] ! I0328 01:20:03.943995       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:35.454606    6044 command_runner.go:130] ! I0328 01:20:03.944004       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:35.454839    6044 command_runner.go:130] ! I0328 01:20:03.944882       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:35.454904    6044 command_runner.go:130] ! I0328 01:20:03.944986       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:35.454904    6044 command_runner.go:130] ! I0328 01:20:13.959538       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:35.454964    6044 command_runner.go:130] ! I0328 01:20:13.959588       1 main.go:227] handling current node
	I0328 01:33:35.455056    6044 command_runner.go:130] ! I0328 01:20:13.959601       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:35.455056    6044 command_runner.go:130] ! I0328 01:20:13.959609       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:35.455114    6044 command_runner.go:130] ! I0328 01:20:13.960072       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:35.455114    6044 command_runner.go:130] ! I0328 01:20:13.960245       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:35.455175    6044 command_runner.go:130] ! I0328 01:20:23.967471       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:35.455231    6044 command_runner.go:130] ! I0328 01:20:23.967523       1 main.go:227] handling current node
	I0328 01:33:35.455231    6044 command_runner.go:130] ! I0328 01:20:23.967537       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:35.455291    6044 command_runner.go:130] ! I0328 01:20:23.967547       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:35.455291    6044 command_runner.go:130] ! I0328 01:20:23.968155       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:35.455347    6044 command_runner.go:130] ! I0328 01:20:23.968173       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:35.455347    6044 command_runner.go:130] ! I0328 01:20:33.977018       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:35.455409    6044 command_runner.go:130] ! I0328 01:20:33.977224       1 main.go:227] handling current node
	I0328 01:33:35.455409    6044 command_runner.go:130] ! I0328 01:20:33.977259       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:35.455487    6044 command_runner.go:130] ! I0328 01:20:33.977287       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:35.455487    6044 command_runner.go:130] ! I0328 01:20:33.978024       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:35.455487    6044 command_runner.go:130] ! I0328 01:20:33.978173       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:35.455550    6044 command_runner.go:130] ! I0328 01:20:43.987057       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:35.455681    6044 command_runner.go:130] ! I0328 01:20:43.987266       1 main.go:227] handling current node
	I0328 01:33:35.455681    6044 command_runner.go:130] ! I0328 01:20:43.987283       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:35.455764    6044 command_runner.go:130] ! I0328 01:20:43.987293       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:35.455824    6044 command_runner.go:130] ! I0328 01:20:43.987429       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:35.455824    6044 command_runner.go:130] ! I0328 01:20:43.987462       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:35.455824    6044 command_runner.go:130] ! I0328 01:20:53.994024       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:35.455879    6044 command_runner.go:130] ! I0328 01:20:53.994070       1 main.go:227] handling current node
	I0328 01:33:35.455879    6044 command_runner.go:130] ! I0328 01:20:53.994120       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:35.455879    6044 command_runner.go:130] ! I0328 01:20:53.994132       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:35.455879    6044 command_runner.go:130] ! I0328 01:20:53.994628       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:35.455879    6044 command_runner.go:130] ! I0328 01:20:53.994669       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:35.455879    6044 command_runner.go:130] ! I0328 01:21:04.009908       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:35.455879    6044 command_runner.go:130] ! I0328 01:21:04.010006       1 main.go:227] handling current node
	I0328 01:33:35.455879    6044 command_runner.go:130] ! I0328 01:21:04.010023       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:35.455879    6044 command_runner.go:130] ! I0328 01:21:04.010033       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:35.455879    6044 command_runner.go:130] ! I0328 01:21:04.010413       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:35.455879    6044 command_runner.go:130] ! I0328 01:21:04.010445       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:35.455879    6044 command_runner.go:130] ! I0328 01:21:14.024266       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:35.455879    6044 command_runner.go:130] ! I0328 01:21:14.024350       1 main.go:227] handling current node
	I0328 01:33:35.455879    6044 command_runner.go:130] ! I0328 01:21:14.024365       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:35.455879    6044 command_runner.go:130] ! I0328 01:21:14.024372       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:35.455879    6044 command_runner.go:130] ! I0328 01:21:14.024495       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:35.455879    6044 command_runner.go:130] ! I0328 01:21:14.024525       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:35.455879    6044 command_runner.go:130] ! I0328 01:21:24.033056       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:35.455879    6044 command_runner.go:130] ! I0328 01:21:24.033221       1 main.go:227] handling current node
	I0328 01:33:35.455879    6044 command_runner.go:130] ! I0328 01:21:24.033244       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:35.455879    6044 command_runner.go:130] ! I0328 01:21:24.033254       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:35.455879    6044 command_runner.go:130] ! I0328 01:21:24.033447       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:35.455879    6044 command_runner.go:130] ! I0328 01:21:24.033718       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:35.455879    6044 command_runner.go:130] ! I0328 01:21:34.054141       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:35.455879    6044 command_runner.go:130] ! I0328 01:21:34.054348       1 main.go:227] handling current node
	I0328 01:33:35.455879    6044 command_runner.go:130] ! I0328 01:21:34.054367       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:35.455879    6044 command_runner.go:130] ! I0328 01:21:34.054377       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:35.455879    6044 command_runner.go:130] ! I0328 01:21:34.056796       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:35.455879    6044 command_runner.go:130] ! I0328 01:21:34.056838       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:35.455879    6044 command_runner.go:130] ! I0328 01:21:44.063011       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:35.455879    6044 command_runner.go:130] ! I0328 01:21:44.063388       1 main.go:227] handling current node
	I0328 01:33:35.455879    6044 command_runner.go:130] ! I0328 01:21:44.063639       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:35.455879    6044 command_runner.go:130] ! I0328 01:21:44.063794       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:35.455879    6044 command_runner.go:130] ! I0328 01:21:44.064166       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:35.455879    6044 command_runner.go:130] ! I0328 01:21:44.064351       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:35.456448    6044 command_runner.go:130] ! I0328 01:21:54.080807       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:35.456448    6044 command_runner.go:130] ! I0328 01:21:54.080904       1 main.go:227] handling current node
	I0328 01:33:35.456505    6044 command_runner.go:130] ! I0328 01:21:54.080921       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:35.456505    6044 command_runner.go:130] ! I0328 01:21:54.080930       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:35.456570    6044 command_runner.go:130] ! I0328 01:21:54.081415       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:35.456623    6044 command_runner.go:130] ! I0328 01:21:54.081491       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:35.456623    6044 command_runner.go:130] ! I0328 01:22:04.094961       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:35.456677    6044 command_runner.go:130] ! I0328 01:22:04.095397       1 main.go:227] handling current node
	I0328 01:33:35.456728    6044 command_runner.go:130] ! I0328 01:22:04.095905       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:35.456728    6044 command_runner.go:130] ! I0328 01:22:04.096341       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:35.456781    6044 command_runner.go:130] ! I0328 01:22:04.096776       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:35.456781    6044 command_runner.go:130] ! I0328 01:22:04.096877       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:35.456833    6044 command_runner.go:130] ! I0328 01:22:14.117899       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:35.456888    6044 command_runner.go:130] ! I0328 01:22:14.118038       1 main.go:227] handling current node
	I0328 01:33:35.456888    6044 command_runner.go:130] ! I0328 01:22:14.118158       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:35.456953    6044 command_runner.go:130] ! I0328 01:22:14.118310       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:35.456953    6044 command_runner.go:130] ! I0328 01:22:14.118821       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:35.456953    6044 command_runner.go:130] ! I0328 01:22:14.119057       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:35.457018    6044 command_runner.go:130] ! I0328 01:22:24.139816       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:35.457018    6044 command_runner.go:130] ! I0328 01:22:24.140951       1 main.go:227] handling current node
	I0328 01:33:35.457080    6044 command_runner.go:130] ! I0328 01:22:24.140979       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:35.457080    6044 command_runner.go:130] ! I0328 01:22:24.140991       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:35.457137    6044 command_runner.go:130] ! I0328 01:22:24.141167       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:35.457198    6044 command_runner.go:130] ! I0328 01:22:24.141178       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:35.457253    6044 command_runner.go:130] ! I0328 01:22:34.156977       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:35.457253    6044 command_runner.go:130] ! I0328 01:22:34.157189       1 main.go:227] handling current node
	I0328 01:33:35.457313    6044 command_runner.go:130] ! I0328 01:22:34.157704       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:35.457313    6044 command_runner.go:130] ! I0328 01:22:34.157819       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:35.457368    6044 command_runner.go:130] ! I0328 01:22:34.158025       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:35.457368    6044 command_runner.go:130] ! I0328 01:22:34.158059       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:35.457428    6044 command_runner.go:130] ! I0328 01:22:44.166881       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:35.457428    6044 command_runner.go:130] ! I0328 01:22:44.167061       1 main.go:227] handling current node
	I0328 01:33:35.457490    6044 command_runner.go:130] ! I0328 01:22:44.167232       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:35.457554    6044 command_runner.go:130] ! I0328 01:22:44.167380       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:35.457554    6044 command_runner.go:130] ! I0328 01:22:44.167748       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:35.457668    6044 command_runner.go:130] ! I0328 01:22:44.167956       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:35.457668    6044 command_runner.go:130] ! I0328 01:22:54.177031       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:35.457734    6044 command_runner.go:130] ! I0328 01:22:54.177191       1 main.go:227] handling current node
	I0328 01:33:35.457734    6044 command_runner.go:130] ! I0328 01:22:54.177209       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:35.457793    6044 command_runner.go:130] ! I0328 01:22:54.177218       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:35.457793    6044 command_runner.go:130] ! I0328 01:22:54.177774       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:35.457856    6044 command_runner.go:130] ! I0328 01:22:54.177906       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:35.457856    6044 command_runner.go:130] ! I0328 01:23:04.192931       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:35.457912    6044 command_runner.go:130] ! I0328 01:23:04.193190       1 main.go:227] handling current node
	I0328 01:33:35.457912    6044 command_runner.go:130] ! I0328 01:23:04.193208       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:35.457975    6044 command_runner.go:130] ! I0328 01:23:04.193218       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:35.457975    6044 command_runner.go:130] ! I0328 01:23:04.193613       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:35.458034    6044 command_runner.go:130] ! I0328 01:23:04.193699       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:35.458034    6044 command_runner.go:130] ! I0328 01:23:14.203281       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:35.458119    6044 command_runner.go:130] ! I0328 01:23:14.203390       1 main.go:227] handling current node
	I0328 01:33:35.458119    6044 command_runner.go:130] ! I0328 01:23:14.203406       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:35.458119    6044 command_runner.go:130] ! I0328 01:23:14.203415       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:35.458194    6044 command_runner.go:130] ! I0328 01:23:14.204005       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:35.458194    6044 command_runner.go:130] ! I0328 01:23:14.204201       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:35.458194    6044 command_runner.go:130] ! I0328 01:23:24.220758       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:35.458248    6044 command_runner.go:130] ! I0328 01:23:24.220806       1 main.go:227] handling current node
	I0328 01:33:35.458292    6044 command_runner.go:130] ! I0328 01:23:24.220822       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:35.458332    6044 command_runner.go:130] ! I0328 01:23:24.220829       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:35.458390    6044 command_runner.go:130] ! I0328 01:23:24.221546       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:35.458390    6044 command_runner.go:130] ! I0328 01:23:24.221683       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:35.458390    6044 command_runner.go:130] ! I0328 01:23:34.228494       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:35.458390    6044 command_runner.go:130] ! I0328 01:23:34.228589       1 main.go:227] handling current node
	I0328 01:33:35.458390    6044 command_runner.go:130] ! I0328 01:23:34.228604       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:35.458390    6044 command_runner.go:130] ! I0328 01:23:34.228613       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:35.458390    6044 command_runner.go:130] ! I0328 01:23:34.229312       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:35.458390    6044 command_runner.go:130] ! I0328 01:23:34.229577       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:35.458390    6044 command_runner.go:130] ! I0328 01:23:44.244452       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:35.458390    6044 command_runner.go:130] ! I0328 01:23:44.244582       1 main.go:227] handling current node
	I0328 01:33:35.458390    6044 command_runner.go:130] ! I0328 01:23:44.244601       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:35.458390    6044 command_runner.go:130] ! I0328 01:23:44.244611       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:35.458390    6044 command_runner.go:130] ! I0328 01:23:44.245136       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:35.458390    6044 command_runner.go:130] ! I0328 01:23:44.245156       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:35.458390    6044 command_runner.go:130] ! I0328 01:23:54.250789       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:35.458390    6044 command_runner.go:130] ! I0328 01:23:54.250891       1 main.go:227] handling current node
	I0328 01:33:35.458390    6044 command_runner.go:130] ! I0328 01:23:54.250907       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:35.458390    6044 command_runner.go:130] ! I0328 01:23:54.250915       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:35.458390    6044 command_runner.go:130] ! I0328 01:23:54.251035       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:35.458390    6044 command_runner.go:130] ! I0328 01:23:54.251227       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:35.458390    6044 command_runner.go:130] ! I0328 01:24:04.266517       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:35.458390    6044 command_runner.go:130] ! I0328 01:24:04.266634       1 main.go:227] handling current node
	I0328 01:33:35.458390    6044 command_runner.go:130] ! I0328 01:24:04.266650       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:35.458390    6044 command_runner.go:130] ! I0328 01:24:04.266659       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:35.458390    6044 command_runner.go:130] ! I0328 01:24:04.266860       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:35.458390    6044 command_runner.go:130] ! I0328 01:24:04.266944       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:35.458390    6044 command_runner.go:130] ! I0328 01:24:14.281321       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:35.458390    6044 command_runner.go:130] ! I0328 01:24:14.281432       1 main.go:227] handling current node
	I0328 01:33:35.458390    6044 command_runner.go:130] ! I0328 01:24:14.281448       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:35.458390    6044 command_runner.go:130] ! I0328 01:24:14.281474       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:35.458390    6044 command_runner.go:130] ! I0328 01:24:14.281660       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:35.458390    6044 command_runner.go:130] ! I0328 01:24:14.281692       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:35.458390    6044 command_runner.go:130] ! I0328 01:24:24.289822       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:35.458390    6044 command_runner.go:130] ! I0328 01:24:24.290280       1 main.go:227] handling current node
	I0328 01:33:35.458390    6044 command_runner.go:130] ! I0328 01:24:24.290352       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:35.458390    6044 command_runner.go:130] ! I0328 01:24:24.290467       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:35.458390    6044 command_runner.go:130] ! I0328 01:24:24.290854       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:35.458390    6044 command_runner.go:130] ! I0328 01:24:24.290943       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:35.458390    6044 command_runner.go:130] ! I0328 01:24:34.303810       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:35.458390    6044 command_runner.go:130] ! I0328 01:24:34.303934       1 main.go:227] handling current node
	I0328 01:33:35.458390    6044 command_runner.go:130] ! I0328 01:24:34.303965       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:35.458937    6044 command_runner.go:130] ! I0328 01:24:34.303998       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:35.458937    6044 command_runner.go:130] ! I0328 01:24:34.304417       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:35.459001    6044 command_runner.go:130] ! I0328 01:24:34.304435       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:35.459001    6044 command_runner.go:130] ! I0328 01:24:44.325930       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:35.459001    6044 command_runner.go:130] ! I0328 01:24:44.326037       1 main.go:227] handling current node
	I0328 01:33:35.459001    6044 command_runner.go:130] ! I0328 01:24:44.326055       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:35.459001    6044 command_runner.go:130] ! I0328 01:24:44.326064       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:35.459001    6044 command_runner.go:130] ! I0328 01:24:44.327133       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:35.459001    6044 command_runner.go:130] ! I0328 01:24:44.327169       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:35.459001    6044 command_runner.go:130] ! I0328 01:24:54.342811       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:35.459001    6044 command_runner.go:130] ! I0328 01:24:54.342842       1 main.go:227] handling current node
	I0328 01:33:35.459001    6044 command_runner.go:130] ! I0328 01:24:54.342871       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:35.459001    6044 command_runner.go:130] ! I0328 01:24:54.342878       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:35.459001    6044 command_runner.go:130] ! I0328 01:24:54.343008       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:35.459001    6044 command_runner.go:130] ! I0328 01:24:54.343016       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:35.459001    6044 command_runner.go:130] ! I0328 01:25:04.359597       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:35.459001    6044 command_runner.go:130] ! I0328 01:25:04.359702       1 main.go:227] handling current node
	I0328 01:33:35.459001    6044 command_runner.go:130] ! I0328 01:25:04.359718       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:35.459001    6044 command_runner.go:130] ! I0328 01:25:04.359727       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:35.459001    6044 command_runner.go:130] ! I0328 01:25:04.360480       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:35.459001    6044 command_runner.go:130] ! I0328 01:25:04.360570       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:35.459001    6044 command_runner.go:130] ! I0328 01:25:14.367988       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:35.459001    6044 command_runner.go:130] ! I0328 01:25:14.368593       1 main.go:227] handling current node
	I0328 01:33:35.459001    6044 command_runner.go:130] ! I0328 01:25:14.368613       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:35.459001    6044 command_runner.go:130] ! I0328 01:25:14.368623       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:35.459001    6044 command_runner.go:130] ! I0328 01:25:14.368889       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:35.459001    6044 command_runner.go:130] ! I0328 01:25:14.368925       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:35.459001    6044 command_runner.go:130] ! I0328 01:25:24.402024       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:35.459001    6044 command_runner.go:130] ! I0328 01:25:24.402202       1 main.go:227] handling current node
	I0328 01:33:35.459001    6044 command_runner.go:130] ! I0328 01:25:24.402220       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:35.459001    6044 command_runner.go:130] ! I0328 01:25:24.402229       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:35.459001    6044 command_runner.go:130] ! I0328 01:25:24.402486       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:35.459001    6044 command_runner.go:130] ! I0328 01:25:24.402522       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:35.459001    6044 command_runner.go:130] ! I0328 01:25:34.417358       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:35.459001    6044 command_runner.go:130] ! I0328 01:25:34.417459       1 main.go:227] handling current node
	I0328 01:33:35.459001    6044 command_runner.go:130] ! I0328 01:25:34.417475       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:35.459001    6044 command_runner.go:130] ! I0328 01:25:34.417485       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:35.459001    6044 command_runner.go:130] ! I0328 01:25:34.417877       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:35.459001    6044 command_runner.go:130] ! I0328 01:25:34.418025       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:35.459001    6044 command_runner.go:130] ! I0328 01:25:44.434985       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:35.459001    6044 command_runner.go:130] ! I0328 01:25:44.435206       1 main.go:227] handling current node
	I0328 01:33:35.459001    6044 command_runner.go:130] ! I0328 01:25:44.435441       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:35.459545    6044 command_runner.go:130] ! I0328 01:25:44.435475       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:35.459545    6044 command_runner.go:130] ! I0328 01:25:44.435904       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:35.459545    6044 command_runner.go:130] ! I0328 01:25:44.436000       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:35.459545    6044 command_runner.go:130] ! I0328 01:25:54.449873       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:35.459545    6044 command_runner.go:130] ! I0328 01:25:54.449975       1 main.go:227] handling current node
	I0328 01:33:35.459545    6044 command_runner.go:130] ! I0328 01:25:54.449990       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:35.459545    6044 command_runner.go:130] ! I0328 01:25:54.449999       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:35.459545    6044 command_runner.go:130] ! I0328 01:25:54.450243       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:35.459545    6044 command_runner.go:130] ! I0328 01:25:54.450388       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:35.459748    6044 command_runner.go:130] ! I0328 01:26:04.463682       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:35.459799    6044 command_runner.go:130] ! I0328 01:26:04.463799       1 main.go:227] handling current node
	I0328 01:33:35.459799    6044 command_runner.go:130] ! I0328 01:26:04.463816       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:35.459799    6044 command_runner.go:130] ! I0328 01:26:04.463828       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:35.459799    6044 command_runner.go:130] ! I0328 01:26:04.463959       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:35.459799    6044 command_runner.go:130] ! I0328 01:26:04.463990       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:35.459799    6044 command_runner.go:130] ! I0328 01:26:14.470825       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:35.459799    6044 command_runner.go:130] ! I0328 01:26:14.471577       1 main.go:227] handling current node
	I0328 01:33:35.459799    6044 command_runner.go:130] ! I0328 01:26:14.471678       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:35.459799    6044 command_runner.go:130] ! I0328 01:26:14.471692       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:35.459799    6044 command_runner.go:130] ! I0328 01:26:14.472010       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:35.459799    6044 command_runner.go:130] ! I0328 01:26:14.472170       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:35.459799    6044 command_runner.go:130] ! I0328 01:26:24.485860       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:35.459799    6044 command_runner.go:130] ! I0328 01:26:24.485913       1 main.go:227] handling current node
	I0328 01:33:35.459799    6044 command_runner.go:130] ! I0328 01:26:24.485944       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:35.459799    6044 command_runner.go:130] ! I0328 01:26:24.485951       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:35.459799    6044 command_runner.go:130] ! I0328 01:26:24.486383       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:35.459799    6044 command_runner.go:130] ! I0328 01:26:24.486499       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:35.459799    6044 command_runner.go:130] ! I0328 01:26:34.502352       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:35.459799    6044 command_runner.go:130] ! I0328 01:26:34.502457       1 main.go:227] handling current node
	I0328 01:33:35.459799    6044 command_runner.go:130] ! I0328 01:26:34.502475       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:35.459799    6044 command_runner.go:130] ! I0328 01:26:34.502484       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:35.459799    6044 command_runner.go:130] ! I0328 01:26:34.502671       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:35.459799    6044 command_runner.go:130] ! I0328 01:26:34.502731       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:35.459799    6044 command_runner.go:130] ! I0328 01:26:44.515791       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:35.459799    6044 command_runner.go:130] ! I0328 01:26:44.516785       1 main.go:227] handling current node
	I0328 01:33:35.459799    6044 command_runner.go:130] ! I0328 01:26:44.517605       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:35.459799    6044 command_runner.go:130] ! I0328 01:26:44.518163       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:35.459799    6044 command_runner.go:130] ! I0328 01:26:44.518724       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:35.459799    6044 command_runner.go:130] ! I0328 01:26:44.519042       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:35.459799    6044 command_runner.go:130] ! I0328 01:26:54.536706       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:35.459799    6044 command_runner.go:130] ! I0328 01:26:54.536762       1 main.go:227] handling current node
	I0328 01:33:35.459799    6044 command_runner.go:130] ! I0328 01:26:54.536796       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:35.459799    6044 command_runner.go:130] ! I0328 01:26:54.537236       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:35.459799    6044 command_runner.go:130] ! I0328 01:26:54.537725       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:35.459799    6044 command_runner.go:130] ! I0328 01:26:54.537823       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:35.459799    6044 command_runner.go:130] ! I0328 01:27:04.553753       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:35.459799    6044 command_runner.go:130] ! I0328 01:27:04.553802       1 main.go:227] handling current node
	I0328 01:33:35.459799    6044 command_runner.go:130] ! I0328 01:27:04.553813       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:35.459799    6044 command_runner.go:130] ! I0328 01:27:04.553820       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:35.459799    6044 command_runner.go:130] ! I0328 01:27:04.554279       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:35.459799    6044 command_runner.go:130] ! I0328 01:27:04.554301       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:35.459799    6044 command_runner.go:130] ! I0328 01:27:14.572473       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:35.459799    6044 command_runner.go:130] ! I0328 01:27:14.572567       1 main.go:227] handling current node
	I0328 01:33:35.459799    6044 command_runner.go:130] ! I0328 01:27:14.572583       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:35.459799    6044 command_runner.go:130] ! I0328 01:27:14.572591       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:35.459799    6044 command_runner.go:130] ! I0328 01:27:14.572710       1 main.go:223] Handling node with IPs: map[172.28.230.180:{}]
	I0328 01:33:35.459799    6044 command_runner.go:130] ! I0328 01:27:14.572740       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.2.0/24] 
	I0328 01:33:35.459799    6044 command_runner.go:130] ! I0328 01:27:24.579996       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:35.459799    6044 command_runner.go:130] ! I0328 01:27:24.580041       1 main.go:227] handling current node
	I0328 01:33:35.459799    6044 command_runner.go:130] ! I0328 01:27:24.580053       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:35.459799    6044 command_runner.go:130] ! I0328 01:27:24.580357       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:35.459799    6044 command_runner.go:130] ! I0328 01:27:34.590722       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:35.459799    6044 command_runner.go:130] ! I0328 01:27:34.590837       1 main.go:227] handling current node
	I0328 01:33:35.459799    6044 command_runner.go:130] ! I0328 01:27:34.590855       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:35.459799    6044 command_runner.go:130] ! I0328 01:27:34.590864       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:35.459799    6044 command_runner.go:130] ! I0328 01:27:34.591158       1 main.go:223] Handling node with IPs: map[172.28.224.172:{}]
	I0328 01:33:35.459799    6044 command_runner.go:130] ! I0328 01:27:34.591426       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.3.0/24] 
	I0328 01:33:35.459799    6044 command_runner.go:130] ! I0328 01:27:34.591599       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.3.0/24 Src: <nil> Gw: 172.28.224.172 Flags: [] Table: 0} 
	I0328 01:33:35.459799    6044 command_runner.go:130] ! I0328 01:27:44.598527       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:35.459799    6044 command_runner.go:130] ! I0328 01:27:44.598576       1 main.go:227] handling current node
	I0328 01:33:35.459799    6044 command_runner.go:130] ! I0328 01:27:44.598590       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:35.459799    6044 command_runner.go:130] ! I0328 01:27:44.598597       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:35.459799    6044 command_runner.go:130] ! I0328 01:27:44.599051       1 main.go:223] Handling node with IPs: map[172.28.224.172:{}]
	I0328 01:33:35.459799    6044 command_runner.go:130] ! I0328 01:27:44.599199       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.3.0/24] 
	I0328 01:33:35.459799    6044 command_runner.go:130] ! I0328 01:27:54.612380       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:35.459799    6044 command_runner.go:130] ! I0328 01:27:54.612492       1 main.go:227] handling current node
	I0328 01:33:35.459799    6044 command_runner.go:130] ! I0328 01:27:54.612511       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:35.459799    6044 command_runner.go:130] ! I0328 01:27:54.612521       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:35.459799    6044 command_runner.go:130] ! I0328 01:27:54.612644       1 main.go:223] Handling node with IPs: map[172.28.224.172:{}]
	I0328 01:33:35.459799    6044 command_runner.go:130] ! I0328 01:27:54.612675       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.3.0/24] 
	I0328 01:33:35.459799    6044 command_runner.go:130] ! I0328 01:28:04.619944       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:35.459799    6044 command_runner.go:130] ! I0328 01:28:04.619975       1 main.go:227] handling current node
	I0328 01:33:35.459799    6044 command_runner.go:130] ! I0328 01:28:04.619987       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:35.459799    6044 command_runner.go:130] ! I0328 01:28:04.619994       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:35.459799    6044 command_runner.go:130] ! I0328 01:28:04.620739       1 main.go:223] Handling node with IPs: map[172.28.224.172:{}]
	I0328 01:33:35.459799    6044 command_runner.go:130] ! I0328 01:28:04.620826       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.3.0/24] 
	I0328 01:33:35.459799    6044 command_runner.go:130] ! I0328 01:28:14.637978       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:35.459799    6044 command_runner.go:130] ! I0328 01:28:14.638455       1 main.go:227] handling current node
	I0328 01:33:35.459799    6044 command_runner.go:130] ! I0328 01:28:14.639024       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:35.461036    6044 command_runner.go:130] ! I0328 01:28:14.639507       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:35.461036    6044 command_runner.go:130] ! I0328 01:28:14.640025       1 main.go:223] Handling node with IPs: map[172.28.224.172:{}]
	I0328 01:33:35.461036    6044 command_runner.go:130] ! I0328 01:28:14.640512       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.3.0/24] 
	I0328 01:33:35.461036    6044 command_runner.go:130] ! I0328 01:28:24.648901       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:35.461036    6044 command_runner.go:130] ! I0328 01:28:24.649550       1 main.go:227] handling current node
	I0328 01:33:35.461036    6044 command_runner.go:130] ! I0328 01:28:24.649741       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:35.461191    6044 command_runner.go:130] ! I0328 01:28:24.650198       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:35.461247    6044 command_runner.go:130] ! I0328 01:28:24.650806       1 main.go:223] Handling node with IPs: map[172.28.224.172:{}]
	I0328 01:33:35.461247    6044 command_runner.go:130] ! I0328 01:28:24.651143       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.3.0/24] 
	I0328 01:33:35.461304    6044 command_runner.go:130] ! I0328 01:28:34.657839       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:35.461304    6044 command_runner.go:130] ! I0328 01:28:34.658038       1 main.go:227] handling current node
	I0328 01:33:35.461357    6044 command_runner.go:130] ! I0328 01:28:34.658054       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:35.461357    6044 command_runner.go:130] ! I0328 01:28:34.658080       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:35.461357    6044 command_runner.go:130] ! I0328 01:28:34.658271       1 main.go:223] Handling node with IPs: map[172.28.224.172:{}]
	I0328 01:33:35.461357    6044 command_runner.go:130] ! I0328 01:28:34.658831       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.3.0/24] 
	I0328 01:33:35.461357    6044 command_runner.go:130] ! I0328 01:28:44.666644       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:35.461357    6044 command_runner.go:130] ! I0328 01:28:44.666752       1 main.go:227] handling current node
	I0328 01:33:35.461357    6044 command_runner.go:130] ! I0328 01:28:44.666769       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:35.461357    6044 command_runner.go:130] ! I0328 01:28:44.666778       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:35.461357    6044 command_runner.go:130] ! I0328 01:28:44.667298       1 main.go:223] Handling node with IPs: map[172.28.224.172:{}]
	I0328 01:33:35.461357    6044 command_runner.go:130] ! I0328 01:28:44.667513       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.3.0/24] 
	I0328 01:33:35.461357    6044 command_runner.go:130] ! I0328 01:28:54.679890       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:35.461357    6044 command_runner.go:130] ! I0328 01:28:54.679999       1 main.go:227] handling current node
	I0328 01:33:35.461357    6044 command_runner.go:130] ! I0328 01:28:54.680015       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:35.461357    6044 command_runner.go:130] ! I0328 01:28:54.680023       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:35.461357    6044 command_runner.go:130] ! I0328 01:28:54.680512       1 main.go:223] Handling node with IPs: map[172.28.224.172:{}]
	I0328 01:33:35.461357    6044 command_runner.go:130] ! I0328 01:28:54.680547       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.3.0/24] 
	I0328 01:33:35.461357    6044 command_runner.go:130] ! I0328 01:29:04.687598       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:35.461357    6044 command_runner.go:130] ! I0328 01:29:04.687765       1 main.go:227] handling current node
	I0328 01:33:35.461357    6044 command_runner.go:130] ! I0328 01:29:04.687785       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:35.461357    6044 command_runner.go:130] ! I0328 01:29:04.687796       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:35.461357    6044 command_runner.go:130] ! I0328 01:29:04.687963       1 main.go:223] Handling node with IPs: map[172.28.224.172:{}]
	I0328 01:33:35.461357    6044 command_runner.go:130] ! I0328 01:29:04.687979       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.3.0/24] 
	I0328 01:33:35.461357    6044 command_runner.go:130] ! I0328 01:29:14.698762       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:35.461357    6044 command_runner.go:130] ! I0328 01:29:14.698810       1 main.go:227] handling current node
	I0328 01:33:35.461357    6044 command_runner.go:130] ! I0328 01:29:14.698825       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:35.461357    6044 command_runner.go:130] ! I0328 01:29:14.698832       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:35.461357    6044 command_runner.go:130] ! I0328 01:29:14.699169       1 main.go:223] Handling node with IPs: map[172.28.224.172:{}]
	I0328 01:33:35.461357    6044 command_runner.go:130] ! I0328 01:29:14.699203       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.3.0/24] 
	I0328 01:33:35.461357    6044 command_runner.go:130] ! I0328 01:29:24.717977       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:35.461898    6044 command_runner.go:130] ! I0328 01:29:24.718118       1 main.go:227] handling current node
	I0328 01:33:35.461963    6044 command_runner.go:130] ! I0328 01:29:24.718136       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:35.461963    6044 command_runner.go:130] ! I0328 01:29:24.718145       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:35.461963    6044 command_runner.go:130] ! I0328 01:29:24.718279       1 main.go:223] Handling node with IPs: map[172.28.224.172:{}]
	I0328 01:33:35.461963    6044 command_runner.go:130] ! I0328 01:29:24.718311       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.3.0/24] 
	I0328 01:33:35.461963    6044 command_runner.go:130] ! I0328 01:29:34.724517       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:33:35.461963    6044 command_runner.go:130] ! I0328 01:29:34.724618       1 main.go:227] handling current node
	I0328 01:33:35.461963    6044 command_runner.go:130] ! I0328 01:29:34.724634       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:35.461963    6044 command_runner.go:130] ! I0328 01:29:34.724643       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:35.461963    6044 command_runner.go:130] ! I0328 01:29:34.725226       1 main.go:223] Handling node with IPs: map[172.28.224.172:{}]
	I0328 01:33:35.461963    6044 command_runner.go:130] ! I0328 01:29:34.725416       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.3.0/24] 
	I0328 01:33:35.481240    6044 logs.go:123] Gathering logs for Docker ...
	I0328 01:33:35.481240    6044 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0328 01:33:35.520344    6044 command_runner.go:130] > Mar 28 01:30:39 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0328 01:33:35.520344    6044 command_runner.go:130] > Mar 28 01:30:39 minikube cri-dockerd[221]: time="2024-03-28T01:30:39Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0328 01:33:35.520344    6044 command_runner.go:130] > Mar 28 01:30:39 minikube cri-dockerd[221]: time="2024-03-28T01:30:39Z" level=info msg="Start docker client with request timeout 0s"
	I0328 01:33:35.520344    6044 command_runner.go:130] > Mar 28 01:30:39 minikube cri-dockerd[221]: time="2024-03-28T01:30:39Z" level=fatal msg="failed to get docker version: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0328 01:33:35.520549    6044 command_runner.go:130] > Mar 28 01:30:39 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0328 01:33:35.520549    6044 command_runner.go:130] > Mar 28 01:30:39 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0328 01:33:35.520549    6044 command_runner.go:130] > Mar 28 01:30:39 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0328 01:33:35.520665    6044 command_runner.go:130] > Mar 28 01:30:42 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 1.
	I0328 01:33:35.520665    6044 command_runner.go:130] > Mar 28 01:30:42 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0328 01:33:35.520665    6044 command_runner.go:130] > Mar 28 01:30:42 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0328 01:33:35.520665    6044 command_runner.go:130] > Mar 28 01:30:42 minikube cri-dockerd[411]: time="2024-03-28T01:30:42Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0328 01:33:35.520665    6044 command_runner.go:130] > Mar 28 01:30:42 minikube cri-dockerd[411]: time="2024-03-28T01:30:42Z" level=info msg="Start docker client with request timeout 0s"
	I0328 01:33:35.520665    6044 command_runner.go:130] > Mar 28 01:30:42 minikube cri-dockerd[411]: time="2024-03-28T01:30:42Z" level=fatal msg="failed to get docker version: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0328 01:33:35.520665    6044 command_runner.go:130] > Mar 28 01:30:42 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0328 01:33:35.520665    6044 command_runner.go:130] > Mar 28 01:30:42 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0328 01:33:35.520665    6044 command_runner.go:130] > Mar 28 01:30:42 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0328 01:33:35.520665    6044 command_runner.go:130] > Mar 28 01:30:44 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 2.
	I0328 01:33:35.520665    6044 command_runner.go:130] > Mar 28 01:30:44 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0328 01:33:35.520665    6044 command_runner.go:130] > Mar 28 01:30:44 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0328 01:33:35.520665    6044 command_runner.go:130] > Mar 28 01:30:44 minikube cri-dockerd[432]: time="2024-03-28T01:30:44Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0328 01:33:35.520665    6044 command_runner.go:130] > Mar 28 01:30:44 minikube cri-dockerd[432]: time="2024-03-28T01:30:44Z" level=info msg="Start docker client with request timeout 0s"
	I0328 01:33:35.520665    6044 command_runner.go:130] > Mar 28 01:30:44 minikube cri-dockerd[432]: time="2024-03-28T01:30:44Z" level=fatal msg="failed to get docker version: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0328 01:33:35.520665    6044 command_runner.go:130] > Mar 28 01:30:44 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0328 01:33:35.520665    6044 command_runner.go:130] > Mar 28 01:30:44 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0328 01:33:35.520665    6044 command_runner.go:130] > Mar 28 01:30:44 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0328 01:33:35.520665    6044 command_runner.go:130] > Mar 28 01:30:46 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 3.
	I0328 01:33:35.520665    6044 command_runner.go:130] > Mar 28 01:30:46 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0328 01:33:35.520665    6044 command_runner.go:130] > Mar 28 01:30:46 minikube systemd[1]: cri-docker.service: Start request repeated too quickly.
	I0328 01:33:35.520665    6044 command_runner.go:130] > Mar 28 01:30:46 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0328 01:33:35.520665    6044 command_runner.go:130] > Mar 28 01:30:46 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0328 01:33:35.520665    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 systemd[1]: Starting Docker Application Container Engine...
	I0328 01:33:35.520665    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[661]: time="2024-03-28T01:31:35.187514586Z" level=info msg="Starting up"
	I0328 01:33:35.520665    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[661]: time="2024-03-28T01:31:35.188793924Z" level=info msg="containerd not running, starting managed containerd"
	I0328 01:33:35.520665    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[661]: time="2024-03-28T01:31:35.190152365Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=667
	I0328 01:33:35.520665    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.231336402Z" level=info msg="starting containerd" revision=dcf2847247e18caba8dce86522029642f60fe96b version=v1.7.14
	I0328 01:33:35.520665    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.261679714Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0328 01:33:35.520665    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.261844319Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0328 01:33:35.521262    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.262043225Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0328 01:33:35.521262    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.262141928Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0328 01:33:35.521262    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.262784947Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0328 01:33:35.521262    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.262879050Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0328 01:33:35.521262    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.263137658Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0328 01:33:35.521262    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.263270562Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0328 01:33:35.521262    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.263294463Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	I0328 01:33:35.521262    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.263307663Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0328 01:33:35.521519    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.263734076Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0328 01:33:35.521519    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.264531200Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0328 01:33:35.521519    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.267908401Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0328 01:33:35.521519    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.268045005Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0328 01:33:35.521519    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.268342414Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0328 01:33:35.521519    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.268438817Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0328 01:33:35.521519    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.269089237Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0328 01:33:35.521739    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.269210440Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	I0328 01:33:35.521739    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.269296343Z" level=info msg="metadata content store policy set" policy=shared
	I0328 01:33:35.521739    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.277331684Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0328 01:33:35.521886    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.277533790Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0328 01:33:35.521886    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.277593492Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0328 01:33:35.521886    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.277648694Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0328 01:33:35.521886    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.277726596Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0328 01:33:35.521886    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.277896701Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0328 01:33:35.521886    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.279273243Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0328 01:33:35.522037    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.279706256Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0328 01:33:35.522037    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.279852560Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0328 01:33:35.522037    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.280041166Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0328 01:33:35.522118    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.280280073Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0328 01:33:35.522118    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.280373676Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0328 01:33:35.522118    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.280594982Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0328 01:33:35.522118    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.280657284Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0328 01:33:35.522200    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.280684285Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0328 01:33:35.522200    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.280713086Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0328 01:33:35.522279    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.280731986Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0328 01:33:35.522279    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.280779288Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0328 01:33:35.522279    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.281122598Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0328 01:33:35.522279    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.281392306Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0328 01:33:35.522374    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.281419307Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0328 01:33:35.522374    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.281475909Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0328 01:33:35.522374    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.281497309Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0328 01:33:35.522374    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.281513210Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0328 01:33:35.522451    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.281527910Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0328 01:33:35.522451    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.281575712Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0328 01:33:35.522451    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.281605113Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0328 01:33:35.522527    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.281624613Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0328 01:33:35.522527    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.281640414Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0328 01:33:35.522527    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.281688915Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0328 01:33:35.522527    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.281906822Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0328 01:33:35.522527    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.282137929Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0328 01:33:35.522625    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.282171230Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0328 01:33:35.522625    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.282426837Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0328 01:33:35.522625    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.282452838Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0328 01:33:35.522625    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.282645244Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0328 01:33:35.522625    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.282848450Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	I0328 01:33:35.522625    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.282869251Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0328 01:33:35.522625    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.282883451Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	I0328 01:33:35.522625    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.282996354Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0328 01:33:35.522625    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.283034556Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0328 01:33:35.522867    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.283048856Z" level=info msg="NRI interface is disabled by configuration."
	I0328 01:33:35.522867    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.283357365Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0328 01:33:35.522867    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.283501170Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0328 01:33:35.522961    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.283575472Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0328 01:33:35.522961    6044 command_runner.go:130] > Mar 28 01:31:35 multinode-240000 dockerd[667]: time="2024-03-28T01:31:35.283615173Z" level=info msg="containerd successfully booted in 0.056485s"
	I0328 01:33:35.522961    6044 command_runner.go:130] > Mar 28 01:31:36 multinode-240000 dockerd[661]: time="2024-03-28T01:31:36.252048243Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0328 01:33:35.523001    6044 command_runner.go:130] > Mar 28 01:31:36 multinode-240000 dockerd[661]: time="2024-03-28T01:31:36.458814267Z" level=info msg="Loading containers: start."
	I0328 01:33:35.523001    6044 command_runner.go:130] > Mar 28 01:31:36 multinode-240000 dockerd[661]: time="2024-03-28T01:31:36.940030727Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0328 01:33:35.523001    6044 command_runner.go:130] > Mar 28 01:31:37 multinode-240000 dockerd[661]: time="2024-03-28T01:31:37.031415390Z" level=info msg="Loading containers: done."
	I0328 01:33:35.523094    6044 command_runner.go:130] > Mar 28 01:31:37 multinode-240000 dockerd[661]: time="2024-03-28T01:31:37.065830879Z" level=info msg="Docker daemon" commit=8b79278 containerd-snapshotter=false storage-driver=overlay2 version=26.0.0
	I0328 01:33:35.523094    6044 command_runner.go:130] > Mar 28 01:31:37 multinode-240000 dockerd[661]: time="2024-03-28T01:31:37.066918879Z" level=info msg="Daemon has completed initialization"
	I0328 01:33:35.523094    6044 command_runner.go:130] > Mar 28 01:31:37 multinode-240000 dockerd[661]: time="2024-03-28T01:31:37.126063860Z" level=info msg="API listen on /var/run/docker.sock"
	I0328 01:33:35.523094    6044 command_runner.go:130] > Mar 28 01:31:37 multinode-240000 dockerd[661]: time="2024-03-28T01:31:37.126232160Z" level=info msg="API listen on [::]:2376"
	I0328 01:33:35.523177    6044 command_runner.go:130] > Mar 28 01:31:37 multinode-240000 systemd[1]: Started Docker Application Container Engine.
	I0328 01:33:35.523177    6044 command_runner.go:130] > Mar 28 01:32:04 multinode-240000 dockerd[661]: time="2024-03-28T01:32:04.977526069Z" level=info msg="Processing signal 'terminated'"
	I0328 01:33:35.523177    6044 command_runner.go:130] > Mar 28 01:32:04 multinode-240000 dockerd[661]: time="2024-03-28T01:32:04.980026875Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0328 01:33:35.523177    6044 command_runner.go:130] > Mar 28 01:32:04 multinode-240000 systemd[1]: Stopping Docker Application Container Engine...
	I0328 01:33:35.523177    6044 command_runner.go:130] > Mar 28 01:32:04 multinode-240000 dockerd[661]: time="2024-03-28T01:32:04.981008678Z" level=info msg="Daemon shutdown complete"
	I0328 01:33:35.523255    6044 command_runner.go:130] > Mar 28 01:32:04 multinode-240000 dockerd[661]: time="2024-03-28T01:32:04.981100578Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0328 01:33:35.523255    6044 command_runner.go:130] > Mar 28 01:32:04 multinode-240000 dockerd[661]: time="2024-03-28T01:32:04.981126378Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0328 01:33:35.523255    6044 command_runner.go:130] > Mar 28 01:32:05 multinode-240000 systemd[1]: docker.service: Deactivated successfully.
	I0328 01:33:35.523255    6044 command_runner.go:130] > Mar 28 01:32:05 multinode-240000 systemd[1]: Stopped Docker Application Container Engine.
	I0328 01:33:35.523255    6044 command_runner.go:130] > Mar 28 01:32:05 multinode-240000 systemd[1]: Starting Docker Application Container Engine...
	I0328 01:33:35.523335    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1051]: time="2024-03-28T01:32:06.063559195Z" level=info msg="Starting up"
	I0328 01:33:35.523335    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1051]: time="2024-03-28T01:32:06.064631697Z" level=info msg="containerd not running, starting managed containerd"
	I0328 01:33:35.523335    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1051]: time="2024-03-28T01:32:06.065637900Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1057
	I0328 01:33:35.523421    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.100209087Z" level=info msg="starting containerd" revision=dcf2847247e18caba8dce86522029642f60fe96b version=v1.7.14
	I0328 01:33:35.523421    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.130085762Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0328 01:33:35.523421    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.130208062Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0328 01:33:35.523421    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.130256862Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0328 01:33:35.523501    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.130275562Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0328 01:33:35.523501    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.130311762Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0328 01:33:35.523501    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.130326962Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0328 01:33:35.523580    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.130572163Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0328 01:33:35.523580    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.130673463Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0328 01:33:35.523580    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.130696363Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	I0328 01:33:35.523693    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.130764663Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0328 01:33:35.523726    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.130798363Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0328 01:33:35.523726    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.130926864Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0328 01:33:35.523756    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.134236672Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0328 01:33:35.523756    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.134361772Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0328 01:33:35.523756    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.134599073Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0328 01:33:35.523756    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.134797173Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0328 01:33:35.523756    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.135068574Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0328 01:33:35.523756    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.135093174Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	I0328 01:33:35.523756    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.135148374Z" level=info msg="metadata content store policy set" policy=shared
	I0328 01:33:35.523756    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.135673176Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0328 01:33:35.523756    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.135920276Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0328 01:33:35.523756    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.135946676Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0328 01:33:35.523756    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.135980176Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0328 01:33:35.523756    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.135997376Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0328 01:33:35.523756    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.136050377Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0328 01:33:35.523756    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.136660078Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0328 01:33:35.523756    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.136812179Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0328 01:33:35.523756    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.136923379Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0328 01:33:35.523756    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.136946979Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0328 01:33:35.523756    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.136964679Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0328 01:33:35.523756    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.136991479Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0328 01:33:35.523756    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.137010579Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0328 01:33:35.523756    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.137027279Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0328 01:33:35.523756    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.137099479Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0328 01:33:35.523756    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.137235380Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0328 01:33:35.523756    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.137265080Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0328 01:33:35.523756    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.137281180Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0328 01:33:35.524365    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.137304080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0328 01:33:35.524365    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.137320180Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0328 01:33:35.524365    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.137338080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0328 01:33:35.524365    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.137353080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0328 01:33:35.524365    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.137374080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0328 01:33:35.524365    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.137389280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0328 01:33:35.524487    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.137427380Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0328 01:33:35.524526    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.137553380Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0328 01:33:35.524526    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.137633981Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0328 01:33:35.524526    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.137657481Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0328 01:33:35.524526    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.137672181Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0328 01:33:35.524604    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.137686281Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0328 01:33:35.524604    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.137700481Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0328 01:33:35.524703    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.137771381Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0328 01:33:35.524703    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.137797181Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0328 01:33:35.524781    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.137811481Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0328 01:33:35.524781    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.137826081Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0328 01:33:35.524781    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.137953481Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0328 01:33:35.524861    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.137975581Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	I0328 01:33:35.524861    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.137988781Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0328 01:33:35.524861    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.138001082Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	I0328 01:33:35.524945    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.138075582Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0328 01:33:35.524945    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.138191982Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0328 01:33:35.524945    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.138211082Z" level=info msg="NRI interface is disabled by configuration."
	I0328 01:33:35.524945    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.138597783Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0328 01:33:35.525025    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.138694583Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0328 01:33:35.525025    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.138839884Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0328 01:33:35.525103    6044 command_runner.go:130] > Mar 28 01:32:06 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:06.138866684Z" level=info msg="containerd successfully booted in 0.040774s"
	I0328 01:33:35.525103    6044 command_runner.go:130] > Mar 28 01:32:07 multinode-240000 dockerd[1051]: time="2024-03-28T01:32:07.114634333Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0328 01:33:35.525103    6044 command_runner.go:130] > Mar 28 01:32:07 multinode-240000 dockerd[1051]: time="2024-03-28T01:32:07.151787026Z" level=info msg="Loading containers: start."
	I0328 01:33:35.525103    6044 command_runner.go:130] > Mar 28 01:32:07 multinode-240000 dockerd[1051]: time="2024-03-28T01:32:07.470888727Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0328 01:33:35.525181    6044 command_runner.go:130] > Mar 28 01:32:07 multinode-240000 dockerd[1051]: time="2024-03-28T01:32:07.559958251Z" level=info msg="Loading containers: done."
	I0328 01:33:35.525181    6044 command_runner.go:130] > Mar 28 01:32:07 multinode-240000 dockerd[1051]: time="2024-03-28T01:32:07.589960526Z" level=info msg="Docker daemon" commit=8b79278 containerd-snapshotter=false storage-driver=overlay2 version=26.0.0
	I0328 01:33:35.525181    6044 command_runner.go:130] > Mar 28 01:32:07 multinode-240000 dockerd[1051]: time="2024-03-28T01:32:07.590109426Z" level=info msg="Daemon has completed initialization"
	I0328 01:33:35.525181    6044 command_runner.go:130] > Mar 28 01:32:07 multinode-240000 dockerd[1051]: time="2024-03-28T01:32:07.638170147Z" level=info msg="API listen on /var/run/docker.sock"
	I0328 01:33:35.525259    6044 command_runner.go:130] > Mar 28 01:32:07 multinode-240000 systemd[1]: Started Docker Application Container Engine.
	I0328 01:33:35.525259    6044 command_runner.go:130] > Mar 28 01:32:07 multinode-240000 dockerd[1051]: time="2024-03-28T01:32:07.638290047Z" level=info msg="API listen on [::]:2376"
	I0328 01:33:35.525259    6044 command_runner.go:130] > Mar 28 01:32:08 multinode-240000 systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0328 01:33:35.525259    6044 command_runner.go:130] > Mar 28 01:32:08 multinode-240000 cri-dockerd[1277]: time="2024-03-28T01:32:08Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0328 01:33:35.525259    6044 command_runner.go:130] > Mar 28 01:32:08 multinode-240000 cri-dockerd[1277]: time="2024-03-28T01:32:08Z" level=info msg="Start docker client with request timeout 0s"
	I0328 01:33:35.525338    6044 command_runner.go:130] > Mar 28 01:32:08 multinode-240000 cri-dockerd[1277]: time="2024-03-28T01:32:08Z" level=info msg="Hairpin mode is set to hairpin-veth"
	I0328 01:33:35.525338    6044 command_runner.go:130] > Mar 28 01:32:08 multinode-240000 cri-dockerd[1277]: time="2024-03-28T01:32:08Z" level=info msg="Loaded network plugin cni"
	I0328 01:33:35.525338    6044 command_runner.go:130] > Mar 28 01:32:08 multinode-240000 cri-dockerd[1277]: time="2024-03-28T01:32:08Z" level=info msg="Docker cri networking managed by network plugin cni"
	I0328 01:33:35.525504    6044 command_runner.go:130] > Mar 28 01:32:08 multinode-240000 cri-dockerd[1277]: time="2024-03-28T01:32:08Z" level=info msg="Docker Info: &{ID:c06283fc-1f43-4b26-80be-81922335c5fe Containers:18 ContainersRunning:0 ContainersPaused:0 ContainersStopped:18 Images:10 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:[] Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:[] Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6tables:true Debug:false NFd:27 OomKillDisable:false NGoroutines:49 SystemTime:2024-03-28T01:32:08.776685604Z LoggingDriver:json-file CgroupDriver:cgroupfs CgroupVersion:2 NEventsListener:0 KernelVersion:5.10.207 OperatingSystem:Buildroot 2023.02.9 OSVersion:2023.02.9 OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:0xc0002cf3b0 NCPU:2 MemTotal:2216206336 GenericResources:[] DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:multinode-240000 Labels:[provider=hyperv] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:map[io.containerd.runc.v2:{Path:runc Args:[] Shim:<nil>} runc:{Path:runc Args:[] Shim:<nil>}] DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:[] Nodes:0 Managers:0 Cluster:<nil> Warnings:[]} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dcf2847247e18caba8dce86522029642f60fe96b Expected:dcf2847247e18caba8dce86522029642f60fe96b} RuncCommit:{ID:51d5e94601ceffbbd85688df1c928ecccbfa4685 Expected:51d5e94601ceffbbd85688df1c928ecccbfa4685} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=builtin name=cgroupns] ProductLicense:Community Engine DefaultAddressPools:[] Warnings:[]}"
	I0328 01:33:35.525504    6044 command_runner.go:130] > Mar 28 01:32:08 multinode-240000 cri-dockerd[1277]: time="2024-03-28T01:32:08Z" level=info msg="Setting cgroupDriver cgroupfs"
	I0328 01:33:35.525504    6044 command_runner.go:130] > Mar 28 01:32:08 multinode-240000 cri-dockerd[1277]: time="2024-03-28T01:32:08Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	I0328 01:33:35.525504    6044 command_runner.go:130] > Mar 28 01:32:08 multinode-240000 cri-dockerd[1277]: time="2024-03-28T01:32:08Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	I0328 01:33:35.525598    6044 command_runner.go:130] > Mar 28 01:32:08 multinode-240000 cri-dockerd[1277]: time="2024-03-28T01:32:08Z" level=info msg="Start cri-dockerd grpc backend"
	I0328 01:33:35.525598    6044 command_runner.go:130] > Mar 28 01:32:08 multinode-240000 systemd[1]: Started CRI Interface for Docker Application Container Engine.
	I0328 01:33:35.525678    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 cri-dockerd[1277]: time="2024-03-28T01:32:13Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"busybox-7fdf7869d9-ct428_default\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"930fbfde452c0b2b3f13a6751fc648a70e87137f38175cb6dd161b40193b9a79\""
	I0328 01:33:35.525678    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 cri-dockerd[1277]: time="2024-03-28T01:32:13Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-76f75df574-776ph_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"6b6f67390b0701700963eec28e4c4cc4aa0e852e4ec0f2392f0f6f5d9bdad52a\""
	I0328 01:33:35.525678    6044 command_runner.go:130] > Mar 28 01:32:14 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:14.605075633Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0328 01:33:35.525777    6044 command_runner.go:130] > Mar 28 01:32:14 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:14.605218534Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0328 01:33:35.525777    6044 command_runner.go:130] > Mar 28 01:32:14 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:14.605234734Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0328 01:33:35.525777    6044 command_runner.go:130] > Mar 28 01:32:14 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:14.606038436Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0328 01:33:35.525852    6044 command_runner.go:130] > Mar 28 01:32:14 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:14.748289893Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0328 01:33:35.525852    6044 command_runner.go:130] > Mar 28 01:32:14 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:14.748491293Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0328 01:33:35.525927    6044 command_runner.go:130] > Mar 28 01:32:14 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:14.748521793Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0328 01:33:35.525927    6044 command_runner.go:130] > Mar 28 01:32:14 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:14.748642993Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0328 01:33:35.525927    6044 command_runner.go:130] > Mar 28 01:32:14 multinode-240000 cri-dockerd[1277]: time="2024-03-28T01:32:14Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/3314134e34d83c71815af773bff505973dcb9797421f75a59b98862dc8bc69bf/resolv.conf as [nameserver 172.28.224.1]"
	I0328 01:33:35.526002    6044 command_runner.go:130] > Mar 28 01:32:14 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:14.844158033Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0328 01:33:35.526002    6044 command_runner.go:130] > Mar 28 01:32:14 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:14.844387234Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0328 01:33:35.526002    6044 command_runner.go:130] > Mar 28 01:32:14 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:14.844509634Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0328 01:33:35.526075    6044 command_runner.go:130] > Mar 28 01:32:14 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:14.844924435Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0328 01:33:35.526075    6044 command_runner.go:130] > Mar 28 01:32:14 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:14.862145778Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0328 01:33:35.526150    6044 command_runner.go:130] > Mar 28 01:32:14 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:14.862239979Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0328 01:33:35.526150    6044 command_runner.go:130] > Mar 28 01:32:14 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:14.862251979Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0328 01:33:35.526150    6044 command_runner.go:130] > Mar 28 01:32:14 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:14.862457779Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0328 01:33:35.526237    6044 command_runner.go:130] > Mar 28 01:32:14 multinode-240000 cri-dockerd[1277]: time="2024-03-28T01:32:14Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/8cf9dbbfda9ea6f2b61a134374c1f92196fe22bde8e166de86c62d863a2fbdb9/resolv.conf as [nameserver 172.28.224.1]"
	I0328 01:33:35.526237    6044 command_runner.go:130] > Mar 28 01:32:15 multinode-240000 cri-dockerd[1277]: time="2024-03-28T01:32:15Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/8780a18ab975521e6b1b20e4b7cffe786927f03654dd858b9d179f1d73d13d81/resolv.conf as [nameserver 172.28.224.1]"
	I0328 01:33:35.526237    6044 command_runner.go:130] > Mar 28 01:32:15 multinode-240000 cri-dockerd[1277]: time="2024-03-28T01:32:15Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/4dd7c4652074475872599900ce854e48425a373dfa665073bd9bfb56fa5330c0/resolv.conf as [nameserver 172.28.224.1]"
	I0328 01:33:35.526312    6044 command_runner.go:130] > Mar 28 01:32:15 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:15.196398617Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0328 01:33:35.526312    6044 command_runner.go:130] > Mar 28 01:32:15 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:15.196541018Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0328 01:33:35.526312    6044 command_runner.go:130] > Mar 28 01:32:15 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:15.196606818Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0328 01:33:35.526386    6044 command_runner.go:130] > Mar 28 01:32:15 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:15.199212424Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0328 01:33:35.526386    6044 command_runner.go:130] > Mar 28 01:32:15 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:15.279595426Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0328 01:33:35.526484    6044 command_runner.go:130] > Mar 28 01:32:15 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:15.279693326Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0328 01:33:35.526484    6044 command_runner.go:130] > Mar 28 01:32:15 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:15.279767327Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0328 01:33:35.526484    6044 command_runner.go:130] > Mar 28 01:32:15 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:15.280052327Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0328 01:33:35.526557    6044 command_runner.go:130] > Mar 28 01:32:15 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:15.393428912Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0328 01:33:35.526588    6044 command_runner.go:130] > Mar 28 01:32:15 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:15.393536412Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0328 01:33:35.526617    6044 command_runner.go:130] > Mar 28 01:32:15 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:15.393553112Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0328 01:33:35.526617    6044 command_runner.go:130] > Mar 28 01:32:15 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:15.393951413Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0328 01:33:35.526617    6044 command_runner.go:130] > Mar 28 01:32:15 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:15.409559852Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0328 01:33:35.526617    6044 command_runner.go:130] > Mar 28 01:32:15 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:15.409616852Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0328 01:33:35.526617    6044 command_runner.go:130] > Mar 28 01:32:15 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:15.409628953Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0328 01:33:35.526617    6044 command_runner.go:130] > Mar 28 01:32:15 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:15.410047254Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0328 01:33:35.526617    6044 command_runner.go:130] > Mar 28 01:32:19 multinode-240000 cri-dockerd[1277]: time="2024-03-28T01:32:19Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
	I0328 01:33:35.526617    6044 command_runner.go:130] > Mar 28 01:32:20 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:20.444492990Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0328 01:33:35.526617    6044 command_runner.go:130] > Mar 28 01:32:20 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:20.445565592Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0328 01:33:35.526617    6044 command_runner.go:130] > Mar 28 01:32:20 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:20.461244632Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0328 01:33:35.526617    6044 command_runner.go:130] > Mar 28 01:32:20 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:20.465433642Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0328 01:33:35.526617    6044 command_runner.go:130] > Mar 28 01:32:20 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:20.501034531Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0328 01:33:35.526617    6044 command_runner.go:130] > Mar 28 01:32:20 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:20.501100632Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0328 01:33:35.526617    6044 command_runner.go:130] > Mar 28 01:32:20 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:20.501129332Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0328 01:33:35.526617    6044 command_runner.go:130] > Mar 28 01:32:20 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:20.501289432Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0328 01:33:35.526617    6044 command_runner.go:130] > Mar 28 01:32:20 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:20.552329460Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0328 01:33:35.526617    6044 command_runner.go:130] > Mar 28 01:32:20 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:20.552525461Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0328 01:33:35.526617    6044 command_runner.go:130] > Mar 28 01:32:20 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:20.552550661Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0328 01:33:35.526617    6044 command_runner.go:130] > Mar 28 01:32:20 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:20.553090962Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0328 01:33:35.527147    6044 command_runner.go:130] > Mar 28 01:32:20 multinode-240000 cri-dockerd[1277]: time="2024-03-28T01:32:20Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/dfd01cb54b7d89aef97b057d7578bb34d4f58b0e2c9aacddeeff9fbb19db3cb6/resolv.conf as [nameserver 172.28.224.1]"
	I0328 01:33:35.527147    6044 command_runner.go:130] > Mar 28 01:32:20 multinode-240000 cri-dockerd[1277]: time="2024-03-28T01:32:20Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/821d3cf9ae1a9ffce2f350e9ee239e00fd8743eb338fae8a5b39734fc9cabf5e/resolv.conf as [nameserver 172.28.224.1]"
	I0328 01:33:35.527147    6044 command_runner.go:130] > Mar 28 01:32:21 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:21.129523609Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0328 01:33:35.527147    6044 command_runner.go:130] > Mar 28 01:32:21 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:21.129601909Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0328 01:33:35.527252    6044 command_runner.go:130] > Mar 28 01:32:21 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:21.129619209Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0328 01:33:35.527279    6044 command_runner.go:130] > Mar 28 01:32:21 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:21.129777210Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0328 01:33:35.527279    6044 command_runner.go:130] > Mar 28 01:32:21 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:21.142530242Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0328 01:33:35.527279    6044 command_runner.go:130] > Mar 28 01:32:21 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:21.142656442Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0328 01:33:35.527279    6044 command_runner.go:130] > Mar 28 01:32:21 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:21.142692242Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0328 01:33:35.527279    6044 command_runner.go:130] > Mar 28 01:32:21 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:21.143468544Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0328 01:33:35.527279    6044 command_runner.go:130] > Mar 28 01:32:21 multinode-240000 cri-dockerd[1277]: time="2024-03-28T01:32:21Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/347f7ad7ebaed8796c8b12cf936e661c605c1c7a9dc02ccb15b4c682a96c1058/resolv.conf as [nameserver 172.28.224.1]"
	I0328 01:33:35.527279    6044 command_runner.go:130] > Mar 28 01:32:21 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:21.510503865Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0328 01:33:35.527279    6044 command_runner.go:130] > Mar 28 01:32:21 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:21.512149169Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0328 01:33:35.527279    6044 command_runner.go:130] > Mar 28 01:32:21 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:21.515162977Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0328 01:33:35.527279    6044 command_runner.go:130] > Mar 28 01:32:21 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:21.515941979Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0328 01:33:35.527279    6044 command_runner.go:130] > Mar 28 01:32:51 multinode-240000 dockerd[1051]: time="2024-03-28T01:32:51.802252517Z" level=info msg="ignoring event" container=4dcf03394ea80c2d0427ea263be7c533ab3aaf86ba89e254db95cbdd1b70a343 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0328 01:33:35.527279    6044 command_runner.go:130] > Mar 28 01:32:51 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:51.804266497Z" level=info msg="shim disconnected" id=4dcf03394ea80c2d0427ea263be7c533ab3aaf86ba89e254db95cbdd1b70a343 namespace=moby
	I0328 01:33:35.527279    6044 command_runner.go:130] > Mar 28 01:32:51 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:51.805357585Z" level=warning msg="cleaning up after shim disconnected" id=4dcf03394ea80c2d0427ea263be7c533ab3aaf86ba89e254db95cbdd1b70a343 namespace=moby
	I0328 01:33:35.527279    6044 command_runner.go:130] > Mar 28 01:32:51 multinode-240000 dockerd[1057]: time="2024-03-28T01:32:51.805496484Z" level=info msg="cleaning up dead shim" namespace=moby
	I0328 01:33:35.527279    6044 command_runner.go:130] > Mar 28 01:33:05 multinode-240000 dockerd[1057]: time="2024-03-28T01:33:05.040212718Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0328 01:33:35.527279    6044 command_runner.go:130] > Mar 28 01:33:05 multinode-240000 dockerd[1057]: time="2024-03-28T01:33:05.040328718Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0328 01:33:35.527279    6044 command_runner.go:130] > Mar 28 01:33:05 multinode-240000 dockerd[1057]: time="2024-03-28T01:33:05.041880913Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0328 01:33:35.527279    6044 command_runner.go:130] > Mar 28 01:33:05 multinode-240000 dockerd[1057]: time="2024-03-28T01:33:05.044028408Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0328 01:33:35.527279    6044 command_runner.go:130] > Mar 28 01:33:24 multinode-240000 dockerd[1057]: time="2024-03-28T01:33:24.067078014Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0328 01:33:35.527279    6044 command_runner.go:130] > Mar 28 01:33:24 multinode-240000 dockerd[1057]: time="2024-03-28T01:33:24.067134214Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0328 01:33:35.527279    6044 command_runner.go:130] > Mar 28 01:33:24 multinode-240000 dockerd[1057]: time="2024-03-28T01:33:24.067145514Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0328 01:33:35.527810    6044 command_runner.go:130] > Mar 28 01:33:24 multinode-240000 dockerd[1057]: time="2024-03-28T01:33:24.067230414Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0328 01:33:35.527810    6044 command_runner.go:130] > Mar 28 01:33:24 multinode-240000 dockerd[1057]: time="2024-03-28T01:33:24.074234221Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0328 01:33:35.527810    6044 command_runner.go:130] > Mar 28 01:33:24 multinode-240000 dockerd[1057]: time="2024-03-28T01:33:24.074356921Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0328 01:33:35.527810    6044 command_runner.go:130] > Mar 28 01:33:24 multinode-240000 dockerd[1057]: time="2024-03-28T01:33:24.074428021Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0328 01:33:35.527810    6044 command_runner.go:130] > Mar 28 01:33:24 multinode-240000 dockerd[1057]: time="2024-03-28T01:33:24.074678322Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0328 01:33:35.527810    6044 command_runner.go:130] > Mar 28 01:33:24 multinode-240000 cri-dockerd[1277]: time="2024-03-28T01:33:24Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/d3a9caca4652153f4a871cbd85e3780df506a9ae46da758a86025933fbaed683/resolv.conf as [nameserver 172.28.224.1]"
	I0328 01:33:35.527810    6044 command_runner.go:130] > Mar 28 01:33:24 multinode-240000 cri-dockerd[1277]: time="2024-03-28T01:33:24Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/57a41fbc578d50e83f1d23eab9fdc7d77f76594eb2d17300827b52b00008af13/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	I0328 01:33:35.527960    6044 command_runner.go:130] > Mar 28 01:33:24 multinode-240000 dockerd[1057]: time="2024-03-28T01:33:24.642121747Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0328 01:33:35.528002    6044 command_runner.go:130] > Mar 28 01:33:24 multinode-240000 dockerd[1057]: time="2024-03-28T01:33:24.644702250Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0328 01:33:35.528058    6044 command_runner.go:130] > Mar 28 01:33:24 multinode-240000 dockerd[1057]: time="2024-03-28T01:33:24.644921750Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0328 01:33:35.528058    6044 command_runner.go:130] > Mar 28 01:33:24 multinode-240000 dockerd[1057]: time="2024-03-28T01:33:24.645074450Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0328 01:33:35.528111    6044 command_runner.go:130] > Mar 28 01:33:24 multinode-240000 dockerd[1057]: time="2024-03-28T01:33:24.675693486Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0328 01:33:35.528111    6044 command_runner.go:130] > Mar 28 01:33:24 multinode-240000 dockerd[1057]: time="2024-03-28T01:33:24.675868286Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0328 01:33:35.528168    6044 command_runner.go:130] > Mar 28 01:33:24 multinode-240000 dockerd[1057]: time="2024-03-28T01:33:24.675939787Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0328 01:33:35.528221    6044 command_runner.go:130] > Mar 28 01:33:24 multinode-240000 dockerd[1057]: time="2024-03-28T01:33:24.676054087Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0328 01:33:35.528221    6044 command_runner.go:130] > Mar 28 01:33:27 multinode-240000 dockerd[1051]: 2024/03/28 01:33:27 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0328 01:33:35.528221    6044 command_runner.go:130] > Mar 28 01:33:27 multinode-240000 dockerd[1051]: 2024/03/28 01:33:27 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0328 01:33:35.528276    6044 command_runner.go:130] > Mar 28 01:33:27 multinode-240000 dockerd[1051]: 2024/03/28 01:33:27 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0328 01:33:35.528276    6044 command_runner.go:130] > Mar 28 01:33:27 multinode-240000 dockerd[1051]: 2024/03/28 01:33:27 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0328 01:33:35.528276    6044 command_runner.go:130] > Mar 28 01:33:27 multinode-240000 dockerd[1051]: 2024/03/28 01:33:27 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0328 01:33:35.528276    6044 command_runner.go:130] > Mar 28 01:33:28 multinode-240000 dockerd[1051]: 2024/03/28 01:33:28 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0328 01:33:35.528276    6044 command_runner.go:130] > Mar 28 01:33:28 multinode-240000 dockerd[1051]: 2024/03/28 01:33:28 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0328 01:33:35.528276    6044 command_runner.go:130] > Mar 28 01:33:28 multinode-240000 dockerd[1051]: 2024/03/28 01:33:28 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0328 01:33:35.528433    6044 command_runner.go:130] > Mar 28 01:33:28 multinode-240000 dockerd[1051]: 2024/03/28 01:33:28 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0328 01:33:35.528494    6044 command_runner.go:130] > Mar 28 01:33:28 multinode-240000 dockerd[1051]: 2024/03/28 01:33:28 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0328 01:33:35.528494    6044 command_runner.go:130] > Mar 28 01:33:28 multinode-240000 dockerd[1051]: 2024/03/28 01:33:28 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0328 01:33:35.528550    6044 command_runner.go:130] > Mar 28 01:33:28 multinode-240000 dockerd[1051]: 2024/03/28 01:33:28 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0328 01:33:35.528608    6044 command_runner.go:130] > Mar 28 01:33:31 multinode-240000 dockerd[1051]: 2024/03/28 01:33:31 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0328 01:33:35.528608    6044 command_runner.go:130] > Mar 28 01:33:31 multinode-240000 dockerd[1051]: 2024/03/28 01:33:31 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0328 01:33:35.528664    6044 command_runner.go:130] > Mar 28 01:33:31 multinode-240000 dockerd[1051]: 2024/03/28 01:33:31 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0328 01:33:35.528664    6044 command_runner.go:130] > Mar 28 01:33:31 multinode-240000 dockerd[1051]: 2024/03/28 01:33:31 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0328 01:33:35.528722    6044 command_runner.go:130] > Mar 28 01:33:31 multinode-240000 dockerd[1051]: 2024/03/28 01:33:31 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0328 01:33:35.528722    6044 command_runner.go:130] > Mar 28 01:33:31 multinode-240000 dockerd[1051]: 2024/03/28 01:33:31 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0328 01:33:35.528771    6044 command_runner.go:130] > Mar 28 01:33:31 multinode-240000 dockerd[1051]: 2024/03/28 01:33:31 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0328 01:33:35.528771    6044 command_runner.go:130] > Mar 28 01:33:31 multinode-240000 dockerd[1051]: 2024/03/28 01:33:31 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0328 01:33:35.528771    6044 command_runner.go:130] > Mar 28 01:33:32 multinode-240000 dockerd[1051]: 2024/03/28 01:33:32 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0328 01:33:35.528771    6044 command_runner.go:130] > Mar 28 01:33:32 multinode-240000 dockerd[1051]: 2024/03/28 01:33:32 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0328 01:33:35.528771    6044 command_runner.go:130] > Mar 28 01:33:32 multinode-240000 dockerd[1051]: 2024/03/28 01:33:32 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0328 01:33:35.528771    6044 command_runner.go:130] > Mar 28 01:33:32 multinode-240000 dockerd[1051]: 2024/03/28 01:33:32 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0328 01:33:35.528771    6044 command_runner.go:130] > Mar 28 01:33:35 multinode-240000 dockerd[1051]: 2024/03/28 01:33:35 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0328 01:33:35.528771    6044 command_runner.go:130] > Mar 28 01:33:35 multinode-240000 dockerd[1051]: 2024/03/28 01:33:35 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0328 01:33:35.562570    6044 logs.go:123] Gathering logs for kube-proxy [7c9638784c60] ...
	I0328 01:33:35.562570    6044 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c9638784c60"
	I0328 01:33:35.591316    6044 command_runner.go:130] ! I0328 01:32:22.346613       1 server_others.go:72] "Using iptables proxy"
	I0328 01:33:35.591316    6044 command_runner.go:130] ! I0328 01:32:22.432600       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["172.28.229.19"]
	I0328 01:33:35.591316    6044 command_runner.go:130] ! I0328 01:32:22.670309       1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
	I0328 01:33:35.591316    6044 command_runner.go:130] ! I0328 01:32:22.670342       1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0328 01:33:35.591793    6044 command_runner.go:130] ! I0328 01:32:22.670422       1 server_others.go:168] "Using iptables Proxier"
	I0328 01:33:35.591793    6044 command_runner.go:130] ! I0328 01:32:22.691003       1 proxier.go:245] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0328 01:33:35.591843    6044 command_runner.go:130] ! I0328 01:32:22.691955       1 server.go:865] "Version info" version="v1.29.3"
	I0328 01:33:35.591843    6044 command_runner.go:130] ! I0328 01:32:22.691998       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0328 01:33:35.591843    6044 command_runner.go:130] ! I0328 01:32:22.703546       1 config.go:188] "Starting service config controller"
	I0328 01:33:35.591919    6044 command_runner.go:130] ! I0328 01:32:22.706995       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0328 01:33:35.591919    6044 command_runner.go:130] ! I0328 01:32:22.707357       1 config.go:97] "Starting endpoint slice config controller"
	I0328 01:33:35.591919    6044 command_runner.go:130] ! I0328 01:32:22.707370       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0328 01:33:35.591919    6044 command_runner.go:130] ! I0328 01:32:22.708174       1 config.go:315] "Starting node config controller"
	I0328 01:33:35.592003    6044 command_runner.go:130] ! I0328 01:32:22.708184       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0328 01:33:35.592003    6044 command_runner.go:130] ! I0328 01:32:22.807593       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0328 01:33:35.592003    6044 command_runner.go:130] ! I0328 01:32:22.807699       1 shared_informer.go:318] Caches are synced for service config
	I0328 01:33:35.592003    6044 command_runner.go:130] ! I0328 01:32:22.808439       1 shared_informer.go:318] Caches are synced for node config
	I0328 01:33:35.594493    6044 logs.go:123] Gathering logs for kube-proxy [bb0b3c542264] ...
	I0328 01:33:35.594565    6044 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb0b3c542264"
	I0328 01:33:35.625075    6044 command_runner.go:130] ! I0328 01:07:46.260052       1 server_others.go:72] "Using iptables proxy"
	I0328 01:33:35.625075    6044 command_runner.go:130] ! I0328 01:07:46.279785       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["172.28.227.122"]
	I0328 01:33:35.625075    6044 command_runner.go:130] ! I0328 01:07:46.364307       1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
	I0328 01:33:35.626019    6044 command_runner.go:130] ! I0328 01:07:46.364414       1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0328 01:33:35.626019    6044 command_runner.go:130] ! I0328 01:07:46.364433       1 server_others.go:168] "Using iptables Proxier"
	I0328 01:33:35.626019    6044 command_runner.go:130] ! I0328 01:07:46.368524       1 proxier.go:245] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0328 01:33:35.626019    6044 command_runner.go:130] ! I0328 01:07:46.368854       1 server.go:865] "Version info" version="v1.29.3"
	I0328 01:33:35.626019    6044 command_runner.go:130] ! I0328 01:07:46.368909       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0328 01:33:35.626019    6044 command_runner.go:130] ! I0328 01:07:46.370904       1 config.go:188] "Starting service config controller"
	I0328 01:33:35.626119    6044 command_runner.go:130] ! I0328 01:07:46.382389       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0328 01:33:35.626119    6044 command_runner.go:130] ! I0328 01:07:46.382488       1 shared_informer.go:318] Caches are synced for service config
	I0328 01:33:35.626119    6044 command_runner.go:130] ! I0328 01:07:46.371910       1 config.go:97] "Starting endpoint slice config controller"
	I0328 01:33:35.626119    6044 command_runner.go:130] ! I0328 01:07:46.382665       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0328 01:33:35.626119    6044 command_runner.go:130] ! I0328 01:07:46.382693       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0328 01:33:35.626119    6044 command_runner.go:130] ! I0328 01:07:46.374155       1 config.go:315] "Starting node config controller"
	I0328 01:33:35.626212    6044 command_runner.go:130] ! I0328 01:07:46.382861       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0328 01:33:35.626212    6044 command_runner.go:130] ! I0328 01:07:46.382887       1 shared_informer.go:318] Caches are synced for node config
	I0328 01:33:35.627181    6044 logs.go:123] Gathering logs for kube-controller-manager [ceaccf323dee] ...
	I0328 01:33:35.627181    6044 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ceaccf323dee"
	I0328 01:33:35.660540    6044 command_runner.go:130] ! I0328 01:32:17.221400       1 serving.go:380] Generated self-signed cert in-memory
	I0328 01:33:35.660540    6044 command_runner.go:130] ! I0328 01:32:17.938996       1 controllermanager.go:187] "Starting" version="v1.29.3"
	I0328 01:33:35.660540    6044 command_runner.go:130] ! I0328 01:32:17.939043       1 controllermanager.go:189] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0328 01:33:35.661547    6044 command_runner.go:130] ! I0328 01:32:17.943203       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0328 01:33:35.661665    6044 command_runner.go:130] ! I0328 01:32:17.943369       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0328 01:33:35.661665    6044 command_runner.go:130] ! I0328 01:32:17.944549       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0328 01:33:35.661665    6044 command_runner.go:130] ! I0328 01:32:17.944700       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0328 01:33:35.661665    6044 command_runner.go:130] ! I0328 01:32:21.401842       1 controllermanager.go:735] "Started controller" controller="serviceaccount-token-controller"
	I0328 01:33:35.661665    6044 command_runner.go:130] ! I0328 01:32:21.405585       1 shared_informer.go:311] Waiting for caches to sync for tokens
	I0328 01:33:35.661741    6044 command_runner.go:130] ! I0328 01:32:21.409924       1 controllermanager.go:735] "Started controller" controller="cronjob-controller"
	I0328 01:33:35.661818    6044 command_runner.go:130] ! I0328 01:32:21.410592       1 cronjob_controllerv2.go:139] "Starting cronjob controller v2"
	I0328 01:33:35.661818    6044 command_runner.go:130] ! I0328 01:32:21.410608       1 shared_informer.go:311] Waiting for caches to sync for cronjob
	I0328 01:33:35.661818    6044 command_runner.go:130] ! I0328 01:32:21.415437       1 controllermanager.go:735] "Started controller" controller="certificatesigningrequest-cleaner-controller"
	I0328 01:33:35.661818    6044 command_runner.go:130] ! I0328 01:32:21.415588       1 cleaner.go:83] "Starting CSR cleaner controller"
	I0328 01:33:35.661818    6044 command_runner.go:130] ! I0328 01:32:21.423473       1 controllermanager.go:735] "Started controller" controller="pod-garbage-collector-controller"
	I0328 01:33:35.661818    6044 command_runner.go:130] ! I0328 01:32:21.424183       1 gc_controller.go:101] "Starting GC controller"
	I0328 01:33:35.661818    6044 command_runner.go:130] ! I0328 01:32:21.424205       1 shared_informer.go:311] Waiting for caches to sync for GC
	I0328 01:33:35.661818    6044 command_runner.go:130] ! I0328 01:32:21.428774       1 controllermanager.go:735] "Started controller" controller="replicaset-controller"
	I0328 01:33:35.662914    6044 command_runner.go:130] ! I0328 01:32:21.429480       1 replica_set.go:214] "Starting controller" name="replicaset"
	I0328 01:33:35.662978    6044 command_runner.go:130] ! I0328 01:32:21.429495       1 shared_informer.go:311] Waiting for caches to sync for ReplicaSet
	I0328 01:33:35.663009    6044 command_runner.go:130] ! I0328 01:32:21.434934       1 controllermanager.go:735] "Started controller" controller="token-cleaner-controller"
	I0328 01:33:35.663009    6044 command_runner.go:130] ! I0328 01:32:21.435336       1 tokencleaner.go:112] "Starting token cleaner controller"
	I0328 01:33:35.663112    6044 command_runner.go:130] ! I0328 01:32:21.440600       1 shared_informer.go:311] Waiting for caches to sync for token_cleaner
	I0328 01:33:35.663112    6044 command_runner.go:130] ! I0328 01:32:21.440609       1 shared_informer.go:318] Caches are synced for token_cleaner
	I0328 01:33:35.663112    6044 command_runner.go:130] ! I0328 01:32:21.447308       1 controllermanager.go:735] "Started controller" controller="persistentvolume-binder-controller"
	I0328 01:33:35.663264    6044 command_runner.go:130] ! I0328 01:32:21.450160       1 pv_controller_base.go:319] "Starting persistent volume controller"
	I0328 01:33:35.663264    6044 command_runner.go:130] ! I0328 01:32:21.450574       1 shared_informer.go:311] Waiting for caches to sync for persistent volume
	I0328 01:33:35.663264    6044 command_runner.go:130] ! I0328 01:32:21.459890       1 controllermanager.go:735] "Started controller" controller="taint-eviction-controller"
	I0328 01:33:35.663361    6044 command_runner.go:130] ! I0328 01:32:21.463892       1 taint_eviction.go:285] "Starting" controller="taint-eviction-controller"
	I0328 01:33:35.663361    6044 command_runner.go:130] ! I0328 01:32:21.464792       1 taint_eviction.go:291] "Sending events to api server"
	I0328 01:33:35.663459    6044 command_runner.go:130] ! I0328 01:32:21.465478       1 shared_informer.go:311] Waiting for caches to sync for taint-eviction-controller
	I0328 01:33:35.663459    6044 command_runner.go:130] ! I0328 01:32:21.467842       1 controllermanager.go:735] "Started controller" controller="endpoints-controller"
	I0328 01:33:35.663459    6044 command_runner.go:130] ! I0328 01:32:21.471786       1 endpoints_controller.go:174] "Starting endpoint controller"
	I0328 01:33:35.663459    6044 command_runner.go:130] ! I0328 01:32:21.472200       1 shared_informer.go:311] Waiting for caches to sync for endpoint
	I0328 01:33:35.663597    6044 command_runner.go:130] ! I0328 01:32:21.482388       1 controllermanager.go:735] "Started controller" controller="endpointslice-mirroring-controller"
	I0328 01:33:35.663597    6044 command_runner.go:130] ! I0328 01:32:21.482635       1 endpointslicemirroring_controller.go:223] "Starting EndpointSliceMirroring controller"
	I0328 01:33:35.663597    6044 command_runner.go:130] ! I0328 01:32:21.482650       1 shared_informer.go:311] Waiting for caches to sync for endpoint_slice_mirroring
	I0328 01:33:35.663597    6044 command_runner.go:130] ! I0328 01:32:21.506106       1 shared_informer.go:318] Caches are synced for tokens
	I0328 01:33:35.663749    6044 command_runner.go:130] ! I0328 01:32:21.543460       1 controllermanager.go:735] "Started controller" controller="namespace-controller"
	I0328 01:33:35.663786    6044 command_runner.go:130] ! I0328 01:32:21.543999       1 namespace_controller.go:197] "Starting namespace controller"
	I0328 01:33:35.663786    6044 command_runner.go:130] ! I0328 01:32:21.544021       1 shared_informer.go:311] Waiting for caches to sync for namespace
	I0328 01:33:35.663786    6044 command_runner.go:130] ! I0328 01:32:21.554383       1 controllermanager.go:735] "Started controller" controller="serviceaccount-controller"
	I0328 01:33:35.663786    6044 command_runner.go:130] ! I0328 01:32:21.555541       1 serviceaccounts_controller.go:111] "Starting service account controller"
	I0328 01:33:35.663946    6044 command_runner.go:130] ! I0328 01:32:21.555562       1 shared_informer.go:311] Waiting for caches to sync for service account
	I0328 01:33:35.663946    6044 command_runner.go:130] ! I0328 01:32:21.587795       1 garbagecollector.go:155] "Starting controller" controller="garbagecollector"
	I0328 01:33:35.663946    6044 command_runner.go:130] ! I0328 01:32:21.587823       1 shared_informer.go:311] Waiting for caches to sync for garbage collector
	I0328 01:33:35.663946    6044 command_runner.go:130] ! I0328 01:32:21.587848       1 graph_builder.go:302] "Running" component="GraphBuilder"
	I0328 01:33:35.664037    6044 command_runner.go:130] ! I0328 01:32:21.592263       1 controllermanager.go:735] "Started controller" controller="garbage-collector-controller"
	I0328 01:33:35.664037    6044 command_runner.go:130] ! E0328 01:32:21.607017       1 core.go:270] "Failed to start cloud node lifecycle controller" err="no cloud provider provided"
	I0328 01:33:35.664037    6044 command_runner.go:130] ! I0328 01:32:21.607046       1 controllermanager.go:713] "Warning: skipping controller" controller="cloud-node-lifecycle-controller"
	I0328 01:33:35.664037    6044 command_runner.go:130] ! I0328 01:32:21.629420       1 controllermanager.go:735] "Started controller" controller="persistentvolume-expander-controller"
	I0328 01:33:35.664037    6044 command_runner.go:130] ! I0328 01:32:21.629868       1 expand_controller.go:328] "Starting expand controller"
	I0328 01:33:35.664037    6044 command_runner.go:130] ! I0328 01:32:21.633210       1 shared_informer.go:311] Waiting for caches to sync for expand
	I0328 01:33:35.664037    6044 command_runner.go:130] ! I0328 01:32:21.640307       1 controllermanager.go:735] "Started controller" controller="endpointslice-controller"
	I0328 01:33:35.664037    6044 command_runner.go:130] ! I0328 01:32:21.640871       1 endpointslice_controller.go:264] "Starting endpoint slice controller"
	I0328 01:33:35.664037    6044 command_runner.go:130] ! I0328 01:32:21.641527       1 shared_informer.go:311] Waiting for caches to sync for endpoint_slice
	I0328 01:33:35.664037    6044 command_runner.go:130] ! I0328 01:32:21.649017       1 controllermanager.go:735] "Started controller" controller="replicationcontroller-controller"
	I0328 01:33:35.664037    6044 command_runner.go:130] ! I0328 01:32:21.649755       1 replica_set.go:214] "Starting controller" name="replicationcontroller"
	I0328 01:33:35.664037    6044 command_runner.go:130] ! I0328 01:32:21.649783       1 shared_informer.go:311] Waiting for caches to sync for ReplicationController
	I0328 01:33:35.664037    6044 command_runner.go:130] ! I0328 01:32:21.663585       1 controllermanager.go:735] "Started controller" controller="deployment-controller"
	I0328 01:33:35.664037    6044 command_runner.go:130] ! I0328 01:32:21.666026       1 deployment_controller.go:168] "Starting controller" controller="deployment"
	I0328 01:33:35.664037    6044 command_runner.go:130] ! I0328 01:32:21.666316       1 shared_informer.go:311] Waiting for caches to sync for deployment
	I0328 01:33:35.664037    6044 command_runner.go:130] ! I0328 01:32:21.701619       1 controllermanager.go:735] "Started controller" controller="disruption-controller"
	I0328 01:33:35.664037    6044 command_runner.go:130] ! I0328 01:32:21.705210       1 disruption.go:433] "Sending events to api server."
	I0328 01:33:35.664037    6044 command_runner.go:130] ! I0328 01:32:21.705303       1 disruption.go:444] "Starting disruption controller"
	I0328 01:33:35.664037    6044 command_runner.go:130] ! I0328 01:32:21.705318       1 shared_informer.go:311] Waiting for caches to sync for disruption
	I0328 01:33:35.664037    6044 command_runner.go:130] ! I0328 01:32:21.710857       1 controllermanager.go:735] "Started controller" controller="statefulset-controller"
	I0328 01:33:35.664037    6044 command_runner.go:130] ! I0328 01:32:21.711002       1 stateful_set.go:161] "Starting stateful set controller"
	I0328 01:33:35.664037    6044 command_runner.go:130] ! I0328 01:32:21.711016       1 shared_informer.go:311] Waiting for caches to sync for stateful set
	I0328 01:33:35.664037    6044 command_runner.go:130] ! I0328 01:32:21.722757       1 certificate_controller.go:115] "Starting certificate controller" name="csrsigning-kubelet-serving"
	I0328 01:33:35.664037    6044 command_runner.go:130] ! I0328 01:32:21.723222       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-kubelet-serving
	I0328 01:33:35.664037    6044 command_runner.go:130] ! I0328 01:32:21.723310       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0328 01:33:35.664037    6044 command_runner.go:130] ! I0328 01:32:21.725677       1 certificate_controller.go:115] "Starting certificate controller" name="csrsigning-kubelet-client"
	I0328 01:33:35.664037    6044 command_runner.go:130] ! I0328 01:32:21.725696       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-kubelet-client
	I0328 01:33:35.664037    6044 command_runner.go:130] ! I0328 01:32:21.725759       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0328 01:33:35.664037    6044 command_runner.go:130] ! I0328 01:32:21.726507       1 certificate_controller.go:115] "Starting certificate controller" name="csrsigning-kube-apiserver-client"
	I0328 01:33:35.664037    6044 command_runner.go:130] ! I0328 01:32:21.726521       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client
	I0328 01:33:35.664037    6044 command_runner.go:130] ! I0328 01:32:21.726539       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0328 01:33:35.664037    6044 command_runner.go:130] ! I0328 01:32:21.751095       1 certificate_controller.go:115] "Starting certificate controller" name="csrsigning-legacy-unknown"
	I0328 01:33:35.664037    6044 command_runner.go:130] ! I0328 01:32:21.751136       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
	I0328 01:33:35.664037    6044 command_runner.go:130] ! I0328 01:32:21.751164       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0328 01:33:35.664037    6044 command_runner.go:130] ! I0328 01:32:21.751048       1 controllermanager.go:735] "Started controller" controller="certificatesigningrequest-signing-controller"
	I0328 01:33:35.664586    6044 command_runner.go:130] ! E0328 01:32:21.760877       1 core.go:105] "Failed to start service controller" err="WARNING: no cloud provider provided, services of type LoadBalancer will fail"
	I0328 01:33:35.664586    6044 command_runner.go:130] ! I0328 01:32:21.761111       1 controllermanager.go:713] "Warning: skipping controller" controller="service-lb-controller"
	I0328 01:33:35.664586    6044 command_runner.go:130] ! I0328 01:32:21.770248       1 controllermanager.go:735] "Started controller" controller="persistentvolumeclaim-protection-controller"
	I0328 01:33:35.664586    6044 command_runner.go:130] ! I0328 01:32:21.771349       1 pvc_protection_controller.go:102] "Starting PVC protection controller"
	I0328 01:33:35.664746    6044 command_runner.go:130] ! I0328 01:32:21.771929       1 shared_informer.go:311] Waiting for caches to sync for PVC protection
	I0328 01:33:35.664746    6044 command_runner.go:130] ! I0328 01:32:21.788256       1 controllermanager.go:735] "Started controller" controller="root-ca-certificate-publisher-controller"
	I0328 01:33:35.664746    6044 command_runner.go:130] ! I0328 01:32:21.788511       1 publisher.go:102] "Starting root CA cert publisher controller"
	I0328 01:33:35.664832    6044 command_runner.go:130] ! I0328 01:32:21.788524       1 shared_informer.go:311] Waiting for caches to sync for crt configmap
	I0328 01:33:35.664893    6044 command_runner.go:130] ! I0328 01:32:21.815523       1 controllermanager.go:735] "Started controller" controller="legacy-serviceaccount-token-cleaner-controller"
	I0328 01:33:35.664893    6044 command_runner.go:130] ! I0328 01:32:21.815692       1 legacy_serviceaccount_token_cleaner.go:103] "Starting legacy service account token cleaner controller"
	I0328 01:33:35.664893    6044 command_runner.go:130] ! I0328 01:32:21.816619       1 shared_informer.go:311] Waiting for caches to sync for legacy-service-account-token-cleaner
	I0328 01:33:35.664989    6044 command_runner.go:130] ! I0328 01:32:21.873573       1 controllermanager.go:735] "Started controller" controller="horizontal-pod-autoscaler-controller"
	I0328 01:33:35.664989    6044 command_runner.go:130] ! I0328 01:32:21.873852       1 controllermanager.go:687] "Controller is disabled by a feature gate" controller="validatingadmissionpolicy-status-controller" requiredFeatureGates=["ValidatingAdmissionPolicy"]
	I0328 01:33:35.664989    6044 command_runner.go:130] ! I0328 01:32:21.873869       1 controllermanager.go:687] "Controller is disabled by a feature gate" controller="service-cidr-controller" requiredFeatureGates=["MultiCIDRServiceAllocator"]
	I0328 01:33:35.664989    6044 command_runner.go:130] ! I0328 01:32:21.873702       1 horizontal.go:200] "Starting HPA controller"
	I0328 01:33:35.665144    6044 command_runner.go:130] ! I0328 01:32:21.874098       1 shared_informer.go:311] Waiting for caches to sync for HPA
	I0328 01:33:35.665144    6044 command_runner.go:130] ! I0328 01:32:21.901041       1 controllermanager.go:735] "Started controller" controller="daemonset-controller"
	I0328 01:33:35.665144    6044 command_runner.go:130] ! I0328 01:32:21.901450       1 daemon_controller.go:291] "Starting daemon sets controller"
	I0328 01:33:35.665144    6044 command_runner.go:130] ! I0328 01:32:21.901466       1 shared_informer.go:311] Waiting for caches to sync for daemon sets
	I0328 01:33:35.665144    6044 command_runner.go:130] ! I0328 01:32:21.907150       1 controllermanager.go:735] "Started controller" controller="ttl-controller"
	I0328 01:33:35.665144    6044 command_runner.go:130] ! I0328 01:32:21.907285       1 ttl_controller.go:124] "Starting TTL controller"
	I0328 01:33:35.665300    6044 command_runner.go:130] ! I0328 01:32:21.907294       1 shared_informer.go:311] Waiting for caches to sync for TTL
	I0328 01:33:35.665395    6044 command_runner.go:130] ! I0328 01:32:21.918008       1 controllermanager.go:735] "Started controller" controller="bootstrap-signer-controller"
	I0328 01:33:35.665453    6044 command_runner.go:130] ! I0328 01:32:21.918049       1 core.go:294] "Warning: configure-cloud-routes is set, but no cloud provider specified. Will not configure cloud provider routes."
	I0328 01:33:35.665453    6044 command_runner.go:130] ! I0328 01:32:21.918077       1 controllermanager.go:713] "Warning: skipping controller" controller="node-route-controller"
	I0328 01:33:35.665453    6044 command_runner.go:130] ! I0328 01:32:21.918277       1 shared_informer.go:311] Waiting for caches to sync for bootstrap_signer
	I0328 01:33:35.665554    6044 command_runner.go:130] ! I0328 01:32:21.926280       1 controllermanager.go:735] "Started controller" controller="ephemeral-volume-controller"
	I0328 01:33:35.665554    6044 command_runner.go:130] ! I0328 01:32:21.926334       1 controllermanager.go:687] "Controller is disabled by a feature gate" controller="storageversion-garbage-collector-controller" requiredFeatureGates=["APIServerIdentity","StorageVersionAPI"]
	I0328 01:33:35.665554    6044 command_runner.go:130] ! I0328 01:32:21.926586       1 controller.go:169] "Starting ephemeral volume controller"
	I0328 01:33:35.665664    6044 command_runner.go:130] ! I0328 01:32:21.926965       1 shared_informer.go:311] Waiting for caches to sync for ephemeral
	I0328 01:33:35.665664    6044 command_runner.go:130] ! I0328 01:32:22.081182       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="ingresses.networking.k8s.io"
	I0328 01:33:35.665664    6044 command_runner.go:130] ! I0328 01:32:22.083797       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="networkpolicies.networking.k8s.io"
	I0328 01:33:35.665764    6044 command_runner.go:130] ! I0328 01:32:22.084146       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="roles.rbac.authorization.k8s.io"
	I0328 01:33:35.665764    6044 command_runner.go:130] ! I0328 01:32:22.084540       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="csistoragecapacities.storage.k8s.io"
	I0328 01:33:35.665851    6044 command_runner.go:130] ! W0328 01:32:22.084798       1 shared_informer.go:591] resyncPeriod 19h39m22.96948195s is smaller than resyncCheckPeriod 22h4m29.884091788s and the informer has already started. Changing it to 22h4m29.884091788s
	I0328 01:33:35.665851    6044 command_runner.go:130] ! I0328 01:32:22.085208       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="serviceaccounts"
	I0328 01:33:35.665851    6044 command_runner.go:130] ! I0328 01:32:22.085543       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="horizontalpodautoscalers.autoscaling"
	I0328 01:33:35.665964    6044 command_runner.go:130] ! I0328 01:32:22.085825       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="statefulsets.apps"
	I0328 01:33:35.665964    6044 command_runner.go:130] ! I0328 01:32:22.086183       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="controllerrevisions.apps"
	I0328 01:33:35.666077    6044 command_runner.go:130] ! I0328 01:32:22.086894       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="podtemplates"
	I0328 01:33:35.666077    6044 command_runner.go:130] ! I0328 01:32:22.087069       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="daemonsets.apps"
	I0328 01:33:35.666077    6044 command_runner.go:130] ! I0328 01:32:22.087521       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="leases.coordination.k8s.io"
	I0328 01:33:35.666192    6044 command_runner.go:130] ! I0328 01:32:22.087567       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="endpoints"
	I0328 01:33:35.666247    6044 command_runner.go:130] ! W0328 01:32:22.087624       1 shared_informer.go:591] resyncPeriod 12h6m23.941100832s is smaller than resyncCheckPeriod 22h4m29.884091788s and the informer has already started. Changing it to 22h4m29.884091788s
	I0328 01:33:35.666310    6044 command_runner.go:130] ! I0328 01:32:22.087903       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="deployments.apps"
	I0328 01:33:35.666355    6044 command_runner.go:130] ! I0328 01:32:22.088034       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="rolebindings.rbac.authorization.k8s.io"
	I0328 01:33:35.666411    6044 command_runner.go:130] ! I0328 01:32:22.088275       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="endpointslices.discovery.k8s.io"
	I0328 01:33:35.666411    6044 command_runner.go:130] ! I0328 01:32:22.088741       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="replicasets.apps"
	I0328 01:33:35.666411    6044 command_runner.go:130] ! I0328 01:32:22.089011       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="poddisruptionbudgets.policy"
	I0328 01:33:35.666526    6044 command_runner.go:130] ! I0328 01:32:22.104096       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="cronjobs.batch"
	I0328 01:33:35.666526    6044 command_runner.go:130] ! I0328 01:32:22.124297       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="jobs.batch"
	I0328 01:33:35.666666    6044 command_runner.go:130] ! I0328 01:32:22.131348       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="limitranges"
	I0328 01:33:35.666666    6044 command_runner.go:130] ! I0328 01:32:22.132084       1 controllermanager.go:735] "Started controller" controller="resourcequota-controller"
	I0328 01:33:35.666666    6044 command_runner.go:130] ! I0328 01:32:22.132998       1 resource_quota_controller.go:294] "Starting resource quota controller"
	I0328 01:33:35.666781    6044 command_runner.go:130] ! I0328 01:32:22.133345       1 shared_informer.go:311] Waiting for caches to sync for resource quota
	I0328 01:33:35.666781    6044 command_runner.go:130] ! I0328 01:32:22.134354       1 resource_quota_monitor.go:305] "QuotaMonitor running"
	I0328 01:33:35.666781    6044 command_runner.go:130] ! I0328 01:32:22.146807       1 controllermanager.go:735] "Started controller" controller="job-controller"
	I0328 01:33:35.666912    6044 command_runner.go:130] ! I0328 01:32:22.147286       1 job_controller.go:224] "Starting job controller"
	I0328 01:33:35.666912    6044 command_runner.go:130] ! I0328 01:32:22.147508       1 shared_informer.go:311] Waiting for caches to sync for job
	I0328 01:33:35.666912    6044 command_runner.go:130] ! I0328 01:32:22.165018       1 node_lifecycle_controller.go:425] "Controller will reconcile labels"
	I0328 01:33:35.667037    6044 command_runner.go:130] ! I0328 01:32:22.165501       1 controllermanager.go:735] "Started controller" controller="node-lifecycle-controller"
	I0328 01:33:35.667037    6044 command_runner.go:130] ! I0328 01:32:22.165846       1 node_lifecycle_controller.go:459] "Sending events to api server"
	I0328 01:33:35.667098    6044 command_runner.go:130] ! I0328 01:32:22.166330       1 node_lifecycle_controller.go:470] "Starting node controller"
	I0328 01:33:35.667098    6044 command_runner.go:130] ! I0328 01:32:22.167894       1 shared_informer.go:311] Waiting for caches to sync for taint
	I0328 01:33:35.667152    6044 command_runner.go:130] ! I0328 01:32:22.212429       1 controllermanager.go:735] "Started controller" controller="clusterrole-aggregation-controller"
	I0328 01:33:35.667199    6044 command_runner.go:130] ! I0328 01:32:22.212522       1 clusterroleaggregation_controller.go:189] "Starting ClusterRoleAggregator controller"
	I0328 01:33:35.667234    6044 command_runner.go:130] ! I0328 01:32:22.212533       1 shared_informer.go:311] Waiting for caches to sync for ClusterRoleAggregator
	I0328 01:33:35.667276    6044 command_runner.go:130] ! I0328 01:32:22.258526       1 controllermanager.go:735] "Started controller" controller="persistentvolume-protection-controller"
	I0328 01:33:35.667330    6044 command_runner.go:130] ! I0328 01:32:22.258865       1 pv_protection_controller.go:78] "Starting PV protection controller"
	I0328 01:33:35.667330    6044 command_runner.go:130] ! I0328 01:32:22.258907       1 shared_informer.go:311] Waiting for caches to sync for PV protection
	I0328 01:33:35.667384    6044 command_runner.go:130] ! I0328 01:32:22.324062       1 controllermanager.go:735] "Started controller" controller="ttl-after-finished-controller"
	I0328 01:33:35.667448    6044 command_runner.go:130] ! I0328 01:32:22.324128       1 ttlafterfinished_controller.go:109] "Starting TTL after finished controller"
	I0328 01:33:35.667511    6044 command_runner.go:130] ! I0328 01:32:22.324137       1 shared_informer.go:311] Waiting for caches to sync for TTL after finished
	I0328 01:33:35.667511    6044 command_runner.go:130] ! I0328 01:32:22.358296       1 controllermanager.go:735] "Started controller" controller="certificatesigningrequest-approving-controller"
	I0328 01:33:35.667591    6044 command_runner.go:130] ! I0328 01:32:22.358367       1 certificate_controller.go:115] "Starting certificate controller" name="csrapproving"
	I0328 01:33:35.667591    6044 command_runner.go:130] ! I0328 01:32:22.358377       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrapproving
	I0328 01:33:35.667682    6044 command_runner.go:130] ! I0328 01:32:32.447083       1 range_allocator.go:111] "No Secondary Service CIDR provided. Skipping filtering out secondary service addresses"
	I0328 01:33:35.667682    6044 command_runner.go:130] ! I0328 01:32:32.447529       1 node_ipam_controller.go:160] "Starting ipam controller"
	I0328 01:33:35.667746    6044 command_runner.go:130] ! I0328 01:32:32.447619       1 shared_informer.go:311] Waiting for caches to sync for node
	I0328 01:33:35.667746    6044 command_runner.go:130] ! I0328 01:32:32.447221       1 controllermanager.go:735] "Started controller" controller="node-ipam-controller"
	I0328 01:33:35.668005    6044 command_runner.go:130] ! I0328 01:32:32.451626       1 controllermanager.go:735] "Started controller" controller="persistentvolume-attach-detach-controller"
	I0328 01:33:35.668078    6044 command_runner.go:130] ! I0328 01:32:32.451960       1 controllermanager.go:687] "Controller is disabled by a feature gate" controller="resourceclaim-controller" requiredFeatureGates=["DynamicResourceAllocation"]
	I0328 01:33:35.668078    6044 command_runner.go:130] ! I0328 01:32:32.451695       1 attach_detach_controller.go:337] "Starting attach detach controller"
	I0328 01:33:35.668149    6044 command_runner.go:130] ! I0328 01:32:32.452296       1 shared_informer.go:311] Waiting for caches to sync for attach detach
	I0328 01:33:35.668205    6044 command_runner.go:130] ! I0328 01:32:32.465613       1 shared_informer.go:311] Waiting for caches to sync for resource quota
	I0328 01:33:35.668244    6044 command_runner.go:130] ! I0328 01:32:32.470233       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-240000-m02"
	I0328 01:33:35.668290    6044 command_runner.go:130] ! I0328 01:32:32.470509       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-240000-m02"
	I0328 01:33:35.668363    6044 command_runner.go:130] ! I0328 01:32:32.470641       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-240000-m02"
	I0328 01:33:35.668363    6044 command_runner.go:130] ! I0328 01:32:32.471011       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-240000\" does not exist"
	I0328 01:33:35.668427    6044 command_runner.go:130] ! I0328 01:32:32.471142       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-240000-m02\" does not exist"
	I0328 01:33:35.668703    6044 command_runner.go:130] ! I0328 01:32:32.471391       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-240000-m03\" does not exist"
	I0328 01:33:35.668703    6044 command_runner.go:130] ! I0328 01:32:32.496560       1 shared_informer.go:311] Waiting for caches to sync for garbage collector
	I0328 01:33:35.668764    6044 command_runner.go:130] ! I0328 01:32:32.507769       1 shared_informer.go:318] Caches are synced for TTL
	I0328 01:33:35.668764    6044 command_runner.go:130] ! I0328 01:32:32.513624       1 shared_informer.go:318] Caches are synced for ClusterRoleAggregator
	I0328 01:33:35.668838    6044 command_runner.go:130] ! I0328 01:32:32.518304       1 shared_informer.go:318] Caches are synced for bootstrap_signer
	I0328 01:33:35.668931    6044 command_runner.go:130] ! I0328 01:32:32.519904       1 shared_informer.go:318] Caches are synced for cronjob
	I0328 01:33:35.668931    6044 command_runner.go:130] ! I0328 01:32:32.524287       1 shared_informer.go:318] Caches are synced for TTL after finished
	I0328 01:33:35.668931    6044 command_runner.go:130] ! I0328 01:32:32.529587       1 shared_informer.go:318] Caches are synced for ReplicaSet
	I0328 01:33:35.669012    6044 command_runner.go:130] ! I0328 01:32:32.531767       1 shared_informer.go:318] Caches are synced for ephemeral
	I0328 01:33:35.669087    6044 command_runner.go:130] ! I0328 01:32:32.533493       1 shared_informer.go:318] Caches are synced for expand
	I0328 01:33:35.669087    6044 command_runner.go:130] ! I0328 01:32:32.549795       1 shared_informer.go:318] Caches are synced for job
	I0328 01:33:35.669087    6044 command_runner.go:130] ! I0328 01:32:32.550526       1 shared_informer.go:318] Caches are synced for namespace
	I0328 01:33:35.669087    6044 command_runner.go:130] ! I0328 01:32:32.550874       1 shared_informer.go:318] Caches are synced for ReplicationController
	I0328 01:33:35.669157    6044 command_runner.go:130] ! I0328 01:32:32.551065       1 shared_informer.go:318] Caches are synced for node
	I0328 01:33:35.669215    6044 command_runner.go:130] ! I0328 01:32:32.551152       1 range_allocator.go:174] "Sending events to api server"
	I0328 01:33:35.669215    6044 command_runner.go:130] ! I0328 01:32:32.551255       1 range_allocator.go:178] "Starting range CIDR allocator"
	I0328 01:33:35.669281    6044 command_runner.go:130] ! I0328 01:32:32.551308       1 shared_informer.go:311] Waiting for caches to sync for cidrallocator
	I0328 01:33:35.669281    6044 command_runner.go:130] ! I0328 01:32:32.551340       1 shared_informer.go:318] Caches are synced for cidrallocator
	I0328 01:33:35.669340    6044 command_runner.go:130] ! I0328 01:32:32.554992       1 shared_informer.go:318] Caches are synced for attach detach
	I0328 01:33:35.669340    6044 command_runner.go:130] ! I0328 01:32:32.555603       1 shared_informer.go:318] Caches are synced for service account
	I0328 01:33:35.669403    6044 command_runner.go:130] ! I0328 01:32:32.555933       1 shared_informer.go:318] Caches are synced for persistent volume
	I0328 01:33:35.669460    6044 command_runner.go:130] ! I0328 01:32:32.568824       1 shared_informer.go:318] Caches are synced for taint
	I0328 01:33:35.669460    6044 command_runner.go:130] ! I0328 01:32:32.568944       1 node_lifecycle_controller.go:1222] "Initializing eviction metric for zone" zone=""
	I0328 01:33:35.669571    6044 command_runner.go:130] ! I0328 01:32:32.568985       1 shared_informer.go:318] Caches are synced for taint-eviction-controller
	I0328 01:33:35.669638    6044 command_runner.go:130] ! I0328 01:32:32.569031       1 shared_informer.go:318] Caches are synced for deployment
	I0328 01:33:35.669638    6044 command_runner.go:130] ! I0328 01:32:32.573248       1 event.go:376] "Event occurred" object="multinode-240000" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-240000 event: Registered Node multinode-240000 in Controller"
	I0328 01:33:35.669703    6044 command_runner.go:130] ! I0328 01:32:32.573552       1 event.go:376] "Event occurred" object="multinode-240000-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-240000-m02 event: Registered Node multinode-240000-m02 in Controller"
	I0328 01:33:35.669756    6044 command_runner.go:130] ! I0328 01:32:32.573778       1 event.go:376] "Event occurred" object="multinode-240000-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-240000-m03 event: Registered Node multinode-240000-m03 in Controller"
	I0328 01:33:35.669860    6044 command_runner.go:130] ! I0328 01:32:32.573567       1 shared_informer.go:318] Caches are synced for PV protection
	I0328 01:33:35.669904    6044 command_runner.go:130] ! I0328 01:32:32.573253       1 shared_informer.go:318] Caches are synced for PVC protection
	I0328 01:33:35.669904    6044 command_runner.go:130] ! I0328 01:32:32.575355       1 shared_informer.go:318] Caches are synced for HPA
	I0328 01:33:35.669904    6044 command_runner.go:130] ! I0328 01:32:32.588982       1 shared_informer.go:318] Caches are synced for crt configmap
	I0328 01:33:35.669962    6044 command_runner.go:130] ! I0328 01:32:32.602942       1 shared_informer.go:318] Caches are synced for daemon sets
	I0328 01:33:35.669962    6044 command_runner.go:130] ! I0328 01:32:32.605960       1 shared_informer.go:318] Caches are synced for disruption
	I0328 01:33:35.670038    6044 command_runner.go:130] ! I0328 01:32:32.607311       1 node_lifecycle_controller.go:874] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-240000"
	I0328 01:33:35.670107    6044 command_runner.go:130] ! I0328 01:32:32.607638       1 node_lifecycle_controller.go:874] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-240000-m02"
	I0328 01:33:35.670162    6044 command_runner.go:130] ! I0328 01:32:32.608098       1 node_lifecycle_controller.go:874] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-240000-m03"
	I0328 01:33:35.670209    6044 command_runner.go:130] ! I0328 01:32:32.608944       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="76.132556ms"
	I0328 01:33:35.670267    6044 command_runner.go:130] ! I0328 01:32:32.609570       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="79.623412ms"
	I0328 01:33:35.670328    6044 command_runner.go:130] ! I0328 01:32:32.610117       1 node_lifecycle_controller.go:1068] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I0328 01:33:35.670406    6044 command_runner.go:130] ! I0328 01:32:32.611937       1 shared_informer.go:318] Caches are synced for stateful set
	I0328 01:33:35.670466    6044 command_runner.go:130] ! I0328 01:32:32.612346       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="59.398µs"
	I0328 01:33:35.670466    6044 command_runner.go:130] ! I0328 01:32:32.612652       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="32.799µs"
	I0328 01:33:35.670539    6044 command_runner.go:130] ! I0328 01:32:32.618783       1 shared_informer.go:318] Caches are synced for legacy-service-account-token-cleaner
	I0328 01:33:35.670621    6044 command_runner.go:130] ! I0328 01:32:32.623971       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-serving
	I0328 01:33:35.670621    6044 command_runner.go:130] ! I0328 01:32:32.624286       1 shared_informer.go:318] Caches are synced for GC
	I0328 01:33:35.670679    6044 command_runner.go:130] ! I0328 01:32:32.626634       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0328 01:33:35.670741    6044 command_runner.go:130] ! I0328 01:32:32.626831       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-client
	I0328 01:33:35.670741    6044 command_runner.go:130] ! I0328 01:32:32.651676       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-legacy-unknown
	I0328 01:33:35.670809    6044 command_runner.go:130] ! I0328 01:32:32.659290       1 shared_informer.go:318] Caches are synced for certificate-csrapproving
	I0328 01:33:35.670809    6044 command_runner.go:130] ! I0328 01:32:32.667521       1 shared_informer.go:318] Caches are synced for resource quota
	I0328 01:33:35.670992    6044 command_runner.go:130] ! I0328 01:32:32.683826       1 shared_informer.go:318] Caches are synced for endpoint_slice_mirroring
	I0328 01:33:35.671035    6044 command_runner.go:130] ! I0328 01:32:32.683944       1 shared_informer.go:318] Caches are synced for endpoint
	I0328 01:33:35.671082    6044 command_runner.go:130] ! I0328 01:32:32.737259       1 shared_informer.go:318] Caches are synced for resource quota
	I0328 01:33:35.671082    6044 command_runner.go:130] ! I0328 01:32:32.742870       1 shared_informer.go:318] Caches are synced for endpoint_slice
	I0328 01:33:35.671150    6044 command_runner.go:130] ! I0328 01:32:33.088175       1 shared_informer.go:318] Caches are synced for garbage collector
	I0328 01:33:35.671150    6044 command_runner.go:130] ! I0328 01:32:33.088209       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I0328 01:33:35.671214    6044 command_runner.go:130] ! I0328 01:32:33.097231       1 shared_informer.go:318] Caches are synced for garbage collector
	I0328 01:33:35.671293    6044 command_runner.go:130] ! I0328 01:32:53.970448       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-240000-m02"
	I0328 01:33:35.671293    6044 command_runner.go:130] ! I0328 01:32:57.647643       1 event.go:376] "Event occurred" object="kube-system/storage-provisioner" fieldPath="" kind="Pod" apiVersion="v1" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod kube-system/storage-provisioner"
	I0328 01:33:35.671356    6044 command_runner.go:130] ! I0328 01:32:57.647943       1 event.go:376] "Event occurred" object="default/busybox-7fdf7869d9-ct428" fieldPath="" kind="Pod" apiVersion="v1" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-7fdf7869d9-ct428"
	I0328 01:33:35.671412    6044 command_runner.go:130] ! I0328 01:32:57.648069       1 event.go:376] "Event occurred" object="kube-system/coredns-76f75df574-776ph" fieldPath="" kind="Pod" apiVersion="v1" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod kube-system/coredns-76f75df574-776ph"
	I0328 01:33:35.671498    6044 command_runner.go:130] ! I0328 01:33:12.667954       1 event.go:376] "Event occurred" object="multinode-240000-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node multinode-240000-m02 status is now: NodeNotReady"
	I0328 01:33:35.671596    6044 command_runner.go:130] ! I0328 01:33:12.686681       1 event.go:376] "Event occurred" object="default/busybox-7fdf7869d9-zgwm4" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0328 01:33:35.671646    6044 command_runner.go:130] ! I0328 01:33:12.698519       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="13.246789ms"
	I0328 01:33:35.671646    6044 command_runner.go:130] ! I0328 01:33:12.699114       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="37.9µs"
	I0328 01:33:35.671750    6044 command_runner.go:130] ! I0328 01:33:12.709080       1 event.go:376] "Event occurred" object="kube-system/kindnet-hsnfl" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0328 01:33:35.671818    6044 command_runner.go:130] ! I0328 01:33:12.733251       1 event.go:376] "Event occurred" object="kube-system/kube-proxy-t88gz" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0328 01:33:35.671818    6044 command_runner.go:130] ! I0328 01:33:25.571898       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="20.940169ms"
	I0328 01:33:35.671818    6044 command_runner.go:130] ! I0328 01:33:25.572013       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="31.4µs"
	I0328 01:33:35.671818    6044 command_runner.go:130] ! I0328 01:33:25.596419       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="70.5µs"
	I0328 01:33:35.671818    6044 command_runner.go:130] ! I0328 01:33:25.652921       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="18.37866ms"
	I0328 01:33:35.671818    6044 command_runner.go:130] ! I0328 01:33:25.653855       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="42.9µs"
	I0328 01:33:35.691045    6044 logs.go:123] Gathering logs for container status ...
	I0328 01:33:35.691045    6044 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0328 01:33:35.792098    6044 command_runner.go:130] > CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	I0328 01:33:35.792235    6044 command_runner.go:130] > dea6e77fe6072       8c811b4aec35f                                                                                         11 seconds ago       Running             busybox                   1                   57a41fbc578d5       busybox-7fdf7869d9-ct428
	I0328 01:33:35.792285    6044 command_runner.go:130] > e6a5a75ec447f       cbb01a7bd410d                                                                                         11 seconds ago       Running             coredns                   1                   d3a9caca46521       coredns-76f75df574-776ph
	I0328 01:33:35.792285    6044 command_runner.go:130] > 64647587ffc1f       6e38f40d628db                                                                                         31 seconds ago       Running             storage-provisioner       2                   821d3cf9ae1a9       storage-provisioner
	I0328 01:33:35.792339    6044 command_runner.go:130] > ee99098e42fc1       4950bb10b3f87                                                                                         About a minute ago   Running             kindnet-cni               1                   347f7ad7ebaed       kindnet-rwghf
	I0328 01:33:35.792339    6044 command_runner.go:130] > 4dcf03394ea80       6e38f40d628db                                                                                         About a minute ago   Exited              storage-provisioner       1                   821d3cf9ae1a9       storage-provisioner
	I0328 01:33:35.792371    6044 command_runner.go:130] > 7c9638784c60f       a1d263b5dc5b0                                                                                         About a minute ago   Running             kube-proxy                1                   dfd01cb54b7d8       kube-proxy-47rqg
	I0328 01:33:35.792371    6044 command_runner.go:130] > 6539c85e1b61f       39f995c9f1996                                                                                         About a minute ago   Running             kube-apiserver            0                   4dd7c46520744       kube-apiserver-multinode-240000
	I0328 01:33:35.792418    6044 command_runner.go:130] > ab4a76ecb029b       3861cfcd7c04c                                                                                         About a minute ago   Running             etcd                      0                   8780a18ab9755       etcd-multinode-240000
	I0328 01:33:35.792418    6044 command_runner.go:130] > bc83a37dbd03c       8c390d98f50c0                                                                                         About a minute ago   Running             kube-scheduler            1                   8cf9dbbfda9ea       kube-scheduler-multinode-240000
	I0328 01:33:35.792418    6044 command_runner.go:130] > ceaccf323deed       6052a25da3f97                                                                                         About a minute ago   Running             kube-controller-manager   1                   3314134e34d83       kube-controller-manager-multinode-240000
	I0328 01:33:35.792552    6044 command_runner.go:130] > a130300bc7839       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   21 minutes ago       Exited              busybox                   0                   930fbfde452c0       busybox-7fdf7869d9-ct428
	I0328 01:33:35.792552    6044 command_runner.go:130] > 29e516c918ef4       cbb01a7bd410d                                                                                         25 minutes ago       Exited              coredns                   0                   6b6f67390b070       coredns-76f75df574-776ph
	I0328 01:33:35.792552    6044 command_runner.go:130] > dc9808261b21c       kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988              25 minutes ago       Exited              kindnet-cni               0                   6ae82cd0a8489       kindnet-rwghf
	I0328 01:33:35.792621    6044 command_runner.go:130] > bb0b3c5422645       a1d263b5dc5b0                                                                                         25 minutes ago       Exited              kube-proxy                0                   5d9ed3a20e885       kube-proxy-47rqg
	I0328 01:33:35.792646    6044 command_runner.go:130] > 1aa05268773e4       6052a25da3f97                                                                                         26 minutes ago       Exited              kube-controller-manager   0                   763932cfdf0b0       kube-controller-manager-multinode-240000
	I0328 01:33:35.792700    6044 command_runner.go:130] > 7061eab02790d       8c390d98f50c0                                                                                         26 minutes ago       Exited              kube-scheduler            0                   7415d077c6f81       kube-scheduler-multinode-240000
	I0328 01:33:35.795584    6044 logs.go:123] Gathering logs for kubelet ...
	I0328 01:33:35.795696    6044 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0328 01:33:35.832756    6044 command_runner.go:130] > Mar 28 01:32:09 multinode-240000 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0328 01:33:35.832756    6044 command_runner.go:130] > Mar 28 01:32:10 multinode-240000 kubelet[1398]: I0328 01:32:10.127138    1398 server.go:487] "Kubelet version" kubeletVersion="v1.29.3"
	I0328 01:33:35.832756    6044 command_runner.go:130] > Mar 28 01:32:10 multinode-240000 kubelet[1398]: I0328 01:32:10.127495    1398 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0328 01:33:35.832756    6044 command_runner.go:130] > Mar 28 01:32:10 multinode-240000 kubelet[1398]: I0328 01:32:10.127845    1398 server.go:919] "Client rotation is on, will bootstrap in background"
	I0328 01:33:35.832756    6044 command_runner.go:130] > Mar 28 01:32:10 multinode-240000 kubelet[1398]: E0328 01:32:10.128279    1398 run.go:74] "command failed" err="failed to run Kubelet: unable to load bootstrap kubeconfig: stat /etc/kubernetes/bootstrap-kubelet.conf: no such file or directory"
	I0328 01:33:35.832756    6044 command_runner.go:130] > Mar 28 01:32:10 multinode-240000 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	I0328 01:33:35.832756    6044 command_runner.go:130] > Mar 28 01:32:10 multinode-240000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	I0328 01:33:35.832756    6044 command_runner.go:130] > Mar 28 01:32:10 multinode-240000 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
	I0328 01:33:35.832756    6044 command_runner.go:130] > Mar 28 01:32:10 multinode-240000 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	I0328 01:33:35.832756    6044 command_runner.go:130] > Mar 28 01:32:10 multinode-240000 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0328 01:33:35.832756    6044 command_runner.go:130] > Mar 28 01:32:10 multinode-240000 kubelet[1450]: I0328 01:32:10.911342    1450 server.go:487] "Kubelet version" kubeletVersion="v1.29.3"
	I0328 01:33:35.832756    6044 command_runner.go:130] > Mar 28 01:32:10 multinode-240000 kubelet[1450]: I0328 01:32:10.911442    1450 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0328 01:33:35.832756    6044 command_runner.go:130] > Mar 28 01:32:10 multinode-240000 kubelet[1450]: I0328 01:32:10.911822    1450 server.go:919] "Client rotation is on, will bootstrap in background"
	I0328 01:33:35.832756    6044 command_runner.go:130] > Mar 28 01:32:10 multinode-240000 kubelet[1450]: E0328 01:32:10.911883    1450 run.go:74] "command failed" err="failed to run Kubelet: unable to load bootstrap kubeconfig: stat /etc/kubernetes/bootstrap-kubelet.conf: no such file or directory"
	I0328 01:33:35.832756    6044 command_runner.go:130] > Mar 28 01:32:10 multinode-240000 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	I0328 01:33:35.832756    6044 command_runner.go:130] > Mar 28 01:32:10 multinode-240000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	I0328 01:33:35.832756    6044 command_runner.go:130] > Mar 28 01:32:11 multinode-240000 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	I0328 01:33:35.832756    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0328 01:33:35.832756    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.568166    1533 server.go:487] "Kubelet version" kubeletVersion="v1.29.3"
	I0328 01:33:35.833815    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.568590    1533 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0328 01:33:35.833815    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.568985    1533 server.go:919] "Client rotation is on, will bootstrap in background"
	I0328 01:33:35.833867    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.572343    1533 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	I0328 01:33:35.833867    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.590932    1533 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0328 01:33:35.833928    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.648763    1533 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /"
	I0328 01:33:35.833962    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.650098    1533 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
	I0328 01:33:35.834053    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.650393    1533 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
	I0328 01:33:35.834119    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.650479    1533 topology_manager.go:138] "Creating topology manager with none policy"
	I0328 01:33:35.834119    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.650495    1533 container_manager_linux.go:301] "Creating device plugin manager"
	I0328 01:33:35.834158    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.652420    1533 state_mem.go:36] "Initialized new in-memory state store"
	I0328 01:33:35.834158    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.654064    1533 kubelet.go:396] "Attempting to sync node with API server"
	I0328 01:33:35.834158    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.654388    1533 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
	I0328 01:33:35.834207    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.654468    1533 kubelet.go:312] "Adding apiserver pod source"
	I0328 01:33:35.834247    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.655057    1533 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
	I0328 01:33:35.834288    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: W0328 01:32:13.659987    1533 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)multinode-240000&limit=500&resourceVersion=0": dial tcp 172.28.229.19:8443: connect: connection refused
	I0328 01:33:35.834326    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: E0328 01:32:13.660087    1533 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)multinode-240000&limit=500&resourceVersion=0": dial tcp 172.28.229.19:8443: connect: connection refused
	I0328 01:33:35.834520    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: W0328 01:32:13.669074    1533 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.28.229.19:8443: connect: connection refused
	I0328 01:33:35.834558    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: E0328 01:32:13.669300    1533 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.28.229.19:8443: connect: connection refused
	I0328 01:33:35.834614    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.674896    1533 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="docker" version="26.0.0" apiVersion="v1"
	I0328 01:33:35.834614    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.676909    1533 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
	I0328 01:33:35.834655    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: W0328 01:32:13.677427    1533 probe.go:268] Flexvolume plugin directory at /usr/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
	I0328 01:33:35.834745    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.678180    1533 server.go:1256] "Started kubelet"
	I0328 01:33:35.834786    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.680600    1533 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
	I0328 01:33:35.834786    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.682066    1533 server.go:461] "Adding debug handlers to kubelet server"
	I0328 01:33:35.834786    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.683585    1533 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
	I0328 01:33:35.834846    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.684672    1533 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
	I0328 01:33:35.834925    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: E0328 01:32:13.686372    1533 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events\": dial tcp 172.28.229.19:8443: connect: connection refused" event="&Event{ObjectMeta:{multinode-240000.17c0c99ccc29b81f  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:multinode-240000,UID:multinode-240000,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:multinode-240000,},FirstTimestamp:2024-03-28 01:32:13.678155807 +0000 UTC m=+0.237165597,LastTimestamp:2024-03-28 01:32:13.678155807 +0000 UTC m=+0.237165597,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:multinode-240000,}"
	I0328 01:33:35.834978    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.690229    1533 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
	I0328 01:33:35.835036    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.708889    1533 volume_manager.go:291] "Starting Kubelet Volume Manager"
	I0328 01:33:35.835036    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.712930    1533 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
	I0328 01:33:35.835074    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: E0328 01:32:13.730166    1533 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-240000?timeout=10s\": dial tcp 172.28.229.19:8443: connect: connection refused" interval="200ms"
	I0328 01:33:35.835123    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: W0328 01:32:13.730938    1533 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.28.229.19:8443: connect: connection refused
	I0328 01:33:35.835123    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: E0328 01:32:13.731114    1533 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.28.229.19:8443: connect: connection refused
	I0328 01:33:35.835195    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.739149    1533 reconciler_new.go:29] "Reconciler: start to sync state"
	I0328 01:33:35.835195    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.749138    1533 factory.go:221] Registration of the systemd container factory successfully
	I0328 01:33:35.835278    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.749449    1533 factory.go:219] Registration of the crio container factory failed: Get "http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)crio%!F(MISSING)crio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
	I0328 01:33:35.835305    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.750189    1533 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
	I0328 01:33:35.835305    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.776861    1533 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
	I0328 01:33:35.835305    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.786285    1533 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
	I0328 01:33:35.835305    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.788142    1533 status_manager.go:217] "Starting to sync pod status with apiserver"
	I0328 01:33:35.835305    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.788369    1533 kubelet.go:2329] "Starting kubelet main sync loop"
	I0328 01:33:35.835305    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: E0328 01:32:13.788778    1533 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
	I0328 01:33:35.835305    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: W0328 01:32:13.796114    1533 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.28.229.19:8443: connect: connection refused
	I0328 01:33:35.835305    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: E0328 01:32:13.796211    1533 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.28.229.19:8443: connect: connection refused
	I0328 01:33:35.835305    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.819127    1533 cpu_manager.go:214] "Starting CPU manager" policy="none"
	I0328 01:33:35.835305    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.819290    1533 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
	I0328 01:33:35.835305    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.819423    1533 state_mem.go:36] "Initialized new in-memory state store"
	I0328 01:33:35.835305    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: E0328 01:32:13.820373    1533 iptables.go:575] "Could not set up iptables canary" err=<
	I0328 01:33:35.835305    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	I0328 01:33:35.835305    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	I0328 01:33:35.835305    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]:         Perhaps ip6tables or your kernel needs to be upgraded.
	I0328 01:33:35.835305    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	I0328 01:33:35.835305    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.823600    1533 state_mem.go:88] "Updated default CPUSet" cpuSet=""
	I0328 01:33:35.835305    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.823686    1533 state_mem.go:96] "Updated CPUSet assignments" assignments={}
	I0328 01:33:35.835305    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.823700    1533 policy_none.go:49] "None policy: Start"
	I0328 01:33:35.835305    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.830073    1533 kubelet_node_status.go:73] "Attempting to register node" node="multinode-240000"
	I0328 01:33:35.835305    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: E0328 01:32:13.831657    1533 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.28.229.19:8443: connect: connection refused" node="multinode-240000"
	I0328 01:33:35.835305    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.843841    1533 memory_manager.go:170] "Starting memorymanager" policy="None"
	I0328 01:33:35.835305    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.843966    1533 state_mem.go:35] "Initializing new in-memory state store"
	I0328 01:33:35.835305    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.844749    1533 state_mem.go:75] "Updated machine memory state"
	I0328 01:33:35.835305    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.847245    1533 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
	I0328 01:33:35.835305    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.848649    1533 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
	I0328 01:33:35.835837    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.890150    1533 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="930fbfde452c0b2b3f13a6751fc648a70e87137f38175cb6dd161b40193b9a79"
	I0328 01:33:35.835880    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.890206    1533 topology_manager.go:215] "Topology Admit Handler" podUID="ada1864a97137760b3789cc738948aa2" podNamespace="kube-system" podName="kube-apiserver-multinode-240000"
	I0328 01:33:35.835880    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.908127    1533 topology_manager.go:215] "Topology Admit Handler" podUID="092744cdc60a216294790b52c372bdaa" podNamespace="kube-system" podName="kube-controller-manager-multinode-240000"
	I0328 01:33:35.835978    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: E0328 01:32:13.916258    1533 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"multinode-240000\" not found"
	I0328 01:33:35.836015    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.922354    1533 topology_manager.go:215] "Topology Admit Handler" podUID="f5f9b00a2a0d8b16290abf555def0fb3" podNamespace="kube-system" podName="kube-scheduler-multinode-240000"
	I0328 01:33:35.836064    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: E0328 01:32:13.932448    1533 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-240000?timeout=10s\": dial tcp 172.28.229.19:8443: connect: connection refused" interval="400ms"
	I0328 01:33:35.836101    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.941331    1533 topology_manager.go:215] "Topology Admit Handler" podUID="9f48c65a58defdbb87996760bf93b230" podNamespace="kube-system" podName="etcd-multinode-240000"
	I0328 01:33:35.836101    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.953609    1533 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6b6f67390b0701700963eec28e4c4cc4aa0e852e4ec0f2392f0f6f5d9bdad52a"
	I0328 01:33:35.836150    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.953654    1533 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="763932cfdf0b0ce7a2df0bd78fe540ad8e5811cd74af29eee46932fb651a4df3"
	I0328 01:33:35.836186    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.953669    1533 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6ae82cd0a848978d4fcc6941c33dd7fd18404e11e40d6b5d9f46484a6af7ec7d"
	I0328 01:33:35.836234    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.966780    1533 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/092744cdc60a216294790b52c372bdaa-usr-share-ca-certificates\") pod \"kube-controller-manager-multinode-240000\" (UID: \"092744cdc60a216294790b52c372bdaa\") " pod="kube-system/kube-controller-manager-multinode-240000"
	I0328 01:33:35.836271    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.966955    1533 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ada1864a97137760b3789cc738948aa2-ca-certs\") pod \"kube-apiserver-multinode-240000\" (UID: \"ada1864a97137760b3789cc738948aa2\") " pod="kube-system/kube-apiserver-multinode-240000"
	I0328 01:33:35.836318    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.967022    1533 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ada1864a97137760b3789cc738948aa2-k8s-certs\") pod \"kube-apiserver-multinode-240000\" (UID: \"ada1864a97137760b3789cc738948aa2\") " pod="kube-system/kube-apiserver-multinode-240000"
	I0328 01:33:35.836361    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.967064    1533 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ada1864a97137760b3789cc738948aa2-usr-share-ca-certificates\") pod \"kube-apiserver-multinode-240000\" (UID: \"ada1864a97137760b3789cc738948aa2\") " pod="kube-system/kube-apiserver-multinode-240000"
	I0328 01:33:35.836401    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.967128    1533 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/092744cdc60a216294790b52c372bdaa-ca-certs\") pod \"kube-controller-manager-multinode-240000\" (UID: \"092744cdc60a216294790b52c372bdaa\") " pod="kube-system/kube-controller-manager-multinode-240000"
	I0328 01:33:35.836483    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.967158    1533 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/092744cdc60a216294790b52c372bdaa-flexvolume-dir\") pod \"kube-controller-manager-multinode-240000\" (UID: \"092744cdc60a216294790b52c372bdaa\") " pod="kube-system/kube-controller-manager-multinode-240000"
	I0328 01:33:35.836546    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.967238    1533 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/092744cdc60a216294790b52c372bdaa-k8s-certs\") pod \"kube-controller-manager-multinode-240000\" (UID: \"092744cdc60a216294790b52c372bdaa\") " pod="kube-system/kube-controller-manager-multinode-240000"
	I0328 01:33:35.836546    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.967310    1533 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/092744cdc60a216294790b52c372bdaa-kubeconfig\") pod \"kube-controller-manager-multinode-240000\" (UID: \"092744cdc60a216294790b52c372bdaa\") " pod="kube-system/kube-controller-manager-multinode-240000"
	I0328 01:33:35.836546    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.969606    1533 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="28426f4e9df5e7247fb25f1d5d48b9917e6d95d1f58292026ed0fde424835379"
	I0328 01:33:35.836546    6044 command_runner.go:130] > Mar 28 01:32:13 multinode-240000 kubelet[1533]: I0328 01:32:13.985622    1533 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5d9ed3a20e88558fec102c7c331c667347b65f4c3d7d91740e135d71d8c45e6d"
	I0328 01:33:35.836546    6044 command_runner.go:130] > Mar 28 01:32:14 multinode-240000 kubelet[1533]: I0328 01:32:14.000616    1533 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7415d077c6f8104e5bc256b9c398a1cd3b34b68ae6ab02765cf3a8a5090c4b88"
	I0328 01:33:35.836546    6044 command_runner.go:130] > Mar 28 01:32:14 multinode-240000 kubelet[1533]: I0328 01:32:14.015792    1533 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ec77663c174f9dcbe665439298f2fb709a33fb88f7ac97c33834b5a202fe4540"
	I0328 01:33:35.836546    6044 command_runner.go:130] > Mar 28 01:32:14 multinode-240000 kubelet[1533]: I0328 01:32:14.042348    1533 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="20ff2ecb3a6dbfc2d1215de07989433af9d7d836214ecb1ab63afc9e48ef03ce"
	I0328 01:33:35.836546    6044 command_runner.go:130] > Mar 28 01:32:14 multinode-240000 kubelet[1533]: I0328 01:32:14.048339    1533 kubelet_node_status.go:73] "Attempting to register node" node="multinode-240000"
	I0328 01:33:35.836546    6044 command_runner.go:130] > Mar 28 01:32:14 multinode-240000 kubelet[1533]: E0328 01:32:14.049760    1533 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.28.229.19:8443: connect: connection refused" node="multinode-240000"
	I0328 01:33:35.836546    6044 command_runner.go:130] > Mar 28 01:32:14 multinode-240000 kubelet[1533]: I0328 01:32:14.068959    1533 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f5f9b00a2a0d8b16290abf555def0fb3-kubeconfig\") pod \"kube-scheduler-multinode-240000\" (UID: \"f5f9b00a2a0d8b16290abf555def0fb3\") " pod="kube-system/kube-scheduler-multinode-240000"
	I0328 01:33:35.836546    6044 command_runner.go:130] > Mar 28 01:32:14 multinode-240000 kubelet[1533]: I0328 01:32:14.069009    1533 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/9f48c65a58defdbb87996760bf93b230-etcd-certs\") pod \"etcd-multinode-240000\" (UID: \"9f48c65a58defdbb87996760bf93b230\") " pod="kube-system/etcd-multinode-240000"
	I0328 01:33:35.836546    6044 command_runner.go:130] > Mar 28 01:32:14 multinode-240000 kubelet[1533]: I0328 01:32:14.069204    1533 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/9f48c65a58defdbb87996760bf93b230-etcd-data\") pod \"etcd-multinode-240000\" (UID: \"9f48c65a58defdbb87996760bf93b230\") " pod="kube-system/etcd-multinode-240000"
	I0328 01:33:35.836546    6044 command_runner.go:130] > Mar 28 01:32:14 multinode-240000 kubelet[1533]: E0328 01:32:14.335282    1533 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-240000?timeout=10s\": dial tcp 172.28.229.19:8443: connect: connection refused" interval="800ms"
	I0328 01:33:35.836546    6044 command_runner.go:130] > Mar 28 01:32:14 multinode-240000 kubelet[1533]: I0328 01:32:14.463052    1533 kubelet_node_status.go:73] "Attempting to register node" node="multinode-240000"
	I0328 01:33:35.836546    6044 command_runner.go:130] > Mar 28 01:32:14 multinode-240000 kubelet[1533]: E0328 01:32:14.464639    1533 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.28.229.19:8443: connect: connection refused" node="multinode-240000"
	I0328 01:33:35.836546    6044 command_runner.go:130] > Mar 28 01:32:14 multinode-240000 kubelet[1533]: W0328 01:32:14.765820    1533 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.28.229.19:8443: connect: connection refused
	I0328 01:33:35.836546    6044 command_runner.go:130] > Mar 28 01:32:14 multinode-240000 kubelet[1533]: E0328 01:32:14.765926    1533 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.28.229.19:8443: connect: connection refused
	I0328 01:33:35.836546    6044 command_runner.go:130] > Mar 28 01:32:14 multinode-240000 kubelet[1533]: W0328 01:32:14.983409    1533 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.28.229.19:8443: connect: connection refused
	I0328 01:33:35.837124    6044 command_runner.go:130] > Mar 28 01:32:14 multinode-240000 kubelet[1533]: E0328 01:32:14.983490    1533 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.28.229.19:8443: connect: connection refused
	I0328 01:33:35.837170    6044 command_runner.go:130] > Mar 28 01:32:15 multinode-240000 kubelet[1533]: I0328 01:32:15.093921    1533 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4dd7c4652074475872599900ce854e48425a373dfa665073bd9bfb56fa5330c0"
	I0328 01:33:35.837170    6044 command_runner.go:130] > Mar 28 01:32:15 multinode-240000 kubelet[1533]: I0328 01:32:15.109197    1533 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8780a18ab975521e6b1b20e4b7cffe786927f03654dd858b9d179f1d73d13d81"
	I0328 01:33:35.837170    6044 command_runner.go:130] > Mar 28 01:32:15 multinode-240000 kubelet[1533]: E0328 01:32:15.138489    1533 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-240000?timeout=10s\": dial tcp 172.28.229.19:8443: connect: connection refused" interval="1.6s"
	I0328 01:33:35.837270    6044 command_runner.go:130] > Mar 28 01:32:15 multinode-240000 kubelet[1533]: W0328 01:32:15.162611    1533 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)multinode-240000&limit=500&resourceVersion=0": dial tcp 172.28.229.19:8443: connect: connection refused
	I0328 01:33:35.837309    6044 command_runner.go:130] > Mar 28 01:32:15 multinode-240000 kubelet[1533]: E0328 01:32:15.162839    1533 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)multinode-240000&limit=500&resourceVersion=0": dial tcp 172.28.229.19:8443: connect: connection refused
	I0328 01:33:35.837360    6044 command_runner.go:130] > Mar 28 01:32:15 multinode-240000 kubelet[1533]: W0328 01:32:15.243486    1533 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.28.229.19:8443: connect: connection refused
	I0328 01:33:35.837396    6044 command_runner.go:130] > Mar 28 01:32:15 multinode-240000 kubelet[1533]: E0328 01:32:15.243618    1533 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.28.229.19:8443: connect: connection refused
	I0328 01:33:35.837443    6044 command_runner.go:130] > Mar 28 01:32:15 multinode-240000 kubelet[1533]: I0328 01:32:15.300156    1533 kubelet_node_status.go:73] "Attempting to register node" node="multinode-240000"
	I0328 01:33:35.837478    6044 command_runner.go:130] > Mar 28 01:32:15 multinode-240000 kubelet[1533]: E0328 01:32:15.300985    1533 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.28.229.19:8443: connect: connection refused" node="multinode-240000"
	I0328 01:33:35.837478    6044 command_runner.go:130] > Mar 28 01:32:16 multinode-240000 kubelet[1533]: I0328 01:32:16.919859    1533 kubelet_node_status.go:73] "Attempting to register node" node="multinode-240000"
	I0328 01:33:35.837555    6044 command_runner.go:130] > Mar 28 01:32:19 multinode-240000 kubelet[1533]: I0328 01:32:19.585350    1533 kubelet_node_status.go:112] "Node was previously registered" node="multinode-240000"
	I0328 01:33:35.837555    6044 command_runner.go:130] > Mar 28 01:32:19 multinode-240000 kubelet[1533]: I0328 01:32:19.586142    1533 kubelet_node_status.go:76] "Successfully registered node" node="multinode-240000"
	I0328 01:33:35.837587    6044 command_runner.go:130] > Mar 28 01:32:19 multinode-240000 kubelet[1533]: I0328 01:32:19.588202    1533 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	I0328 01:33:35.837623    6044 command_runner.go:130] > Mar 28 01:32:19 multinode-240000 kubelet[1533]: I0328 01:32:19.589607    1533 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	I0328 01:33:35.837665    6044 command_runner.go:130] > Mar 28 01:32:19 multinode-240000 kubelet[1533]: I0328 01:32:19.606942    1533 setters.go:568] "Node became not ready" node="multinode-240000" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-03-28T01:32:19Z","lastTransitionTime":"2024-03-28T01:32:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"}
	I0328 01:33:35.837665    6044 command_runner.go:130] > Mar 28 01:32:19 multinode-240000 kubelet[1533]: I0328 01:32:19.664958    1533 apiserver.go:52] "Watching apiserver"
	I0328 01:33:35.837702    6044 command_runner.go:130] > Mar 28 01:32:19 multinode-240000 kubelet[1533]: I0328 01:32:19.670955    1533 topology_manager.go:215] "Topology Admit Handler" podUID="dc1416cc-736d-4eab-b95d-e963572b78e3" podNamespace="kube-system" podName="coredns-76f75df574-776ph"
	I0328 01:33:35.837762    6044 command_runner.go:130] > Mar 28 01:32:19 multinode-240000 kubelet[1533]: E0328 01:32:19.671192    1533 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-76f75df574-776ph" podUID="dc1416cc-736d-4eab-b95d-e963572b78e3"
	I0328 01:33:35.837798    6044 command_runner.go:130] > Mar 28 01:32:19 multinode-240000 kubelet[1533]: I0328 01:32:19.671207    1533 kubelet.go:1903] "Trying to delete pod" pod="kube-system/etcd-multinode-240000" podUID="8c9e76e4-ed9f-4595-aa5e-ddd6e74f4e93"
	I0328 01:33:35.837798    6044 command_runner.go:130] > Mar 28 01:32:19 multinode-240000 kubelet[1533]: I0328 01:32:19.672582    1533 topology_manager.go:215] "Topology Admit Handler" podUID="7c75e225-0e90-4916-bf27-a00a036e0955" podNamespace="kube-system" podName="kindnet-rwghf"
	I0328 01:33:35.837863    6044 command_runner.go:130] > Mar 28 01:32:19 multinode-240000 kubelet[1533]: I0328 01:32:19.672700    1533 topology_manager.go:215] "Topology Admit Handler" podUID="22fd5683-834d-47ae-a5b4-1ed980514e1b" podNamespace="kube-system" podName="kube-proxy-47rqg"
	I0328 01:33:35.837863    6044 command_runner.go:130] > Mar 28 01:32:19 multinode-240000 kubelet[1533]: I0328 01:32:19.672921    1533 topology_manager.go:215] "Topology Admit Handler" podUID="3f881c2f-5b9a-4dc5-bc8c-56784b9ff60f" podNamespace="kube-system" podName="storage-provisioner"
	I0328 01:33:35.837971    6044 command_runner.go:130] > Mar 28 01:32:19 multinode-240000 kubelet[1533]: I0328 01:32:19.672997    1533 topology_manager.go:215] "Topology Admit Handler" podUID="82be2bd2-ca76-4804-8e23-ebd40a434863" podNamespace="default" podName="busybox-7fdf7869d9-ct428"
	I0328 01:33:35.838169    6044 command_runner.go:130] > Mar 28 01:32:19 multinode-240000 kubelet[1533]: E0328 01:32:19.673204    1533 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7fdf7869d9-ct428" podUID="82be2bd2-ca76-4804-8e23-ebd40a434863"
	I0328 01:33:35.838211    6044 command_runner.go:130] > Mar 28 01:32:19 multinode-240000 kubelet[1533]: I0328 01:32:19.674661    1533 kubelet.go:1903] "Trying to delete pod" pod="kube-system/kube-apiserver-multinode-240000" podUID="7736298d-3898-4693-84bf-2311305bf52c"
	I0328 01:33:35.838211    6044 command_runner.go:130] > Mar 28 01:32:19 multinode-240000 kubelet[1533]: I0328 01:32:19.710220    1533 kubelet.go:1908] "Deleted mirror pod because it is outdated" pod="kube-system/etcd-multinode-240000"
	I0328 01:33:35.838211    6044 command_runner.go:130] > Mar 28 01:32:19 multinode-240000 kubelet[1533]: I0328 01:32:19.714418    1533 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
	I0328 01:33:35.838305    6044 command_runner.go:130] > Mar 28 01:32:19 multinode-240000 kubelet[1533]: I0328 01:32:19.725067    1533 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7c75e225-0e90-4916-bf27-a00a036e0955-xtables-lock\") pod \"kindnet-rwghf\" (UID: \"7c75e225-0e90-4916-bf27-a00a036e0955\") " pod="kube-system/kindnet-rwghf"
	I0328 01:33:35.838332    6044 command_runner.go:130] > Mar 28 01:32:19 multinode-240000 kubelet[1533]: I0328 01:32:19.725144    1533 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/3f881c2f-5b9a-4dc5-bc8c-56784b9ff60f-tmp\") pod \"storage-provisioner\" (UID: \"3f881c2f-5b9a-4dc5-bc8c-56784b9ff60f\") " pod="kube-system/storage-provisioner"
	I0328 01:33:35.838332    6044 command_runner.go:130] > Mar 28 01:32:19 multinode-240000 kubelet[1533]: I0328 01:32:19.725200    1533 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/22fd5683-834d-47ae-a5b4-1ed980514e1b-xtables-lock\") pod \"kube-proxy-47rqg\" (UID: \"22fd5683-834d-47ae-a5b4-1ed980514e1b\") " pod="kube-system/kube-proxy-47rqg"
	I0328 01:33:35.838332    6044 command_runner.go:130] > Mar 28 01:32:19 multinode-240000 kubelet[1533]: I0328 01:32:19.725237    1533 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/7c75e225-0e90-4916-bf27-a00a036e0955-cni-cfg\") pod \"kindnet-rwghf\" (UID: \"7c75e225-0e90-4916-bf27-a00a036e0955\") " pod="kube-system/kindnet-rwghf"
	I0328 01:33:35.838332    6044 command_runner.go:130] > Mar 28 01:32:19 multinode-240000 kubelet[1533]: I0328 01:32:19.725266    1533 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7c75e225-0e90-4916-bf27-a00a036e0955-lib-modules\") pod \"kindnet-rwghf\" (UID: \"7c75e225-0e90-4916-bf27-a00a036e0955\") " pod="kube-system/kindnet-rwghf"
	I0328 01:33:35.838332    6044 command_runner.go:130] > Mar 28 01:32:19 multinode-240000 kubelet[1533]: I0328 01:32:19.725305    1533 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/22fd5683-834d-47ae-a5b4-1ed980514e1b-lib-modules\") pod \"kube-proxy-47rqg\" (UID: \"22fd5683-834d-47ae-a5b4-1ed980514e1b\") " pod="kube-system/kube-proxy-47rqg"
	I0328 01:33:35.838332    6044 command_runner.go:130] > Mar 28 01:32:19 multinode-240000 kubelet[1533]: E0328 01:32:19.725432    1533 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0328 01:33:35.838332    6044 command_runner.go:130] > Mar 28 01:32:19 multinode-240000 kubelet[1533]: E0328 01:32:19.725551    1533 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/dc1416cc-736d-4eab-b95d-e963572b78e3-config-volume podName:dc1416cc-736d-4eab-b95d-e963572b78e3 nodeName:}" failed. No retries permitted until 2024-03-28 01:32:20.225500685 +0000 UTC m=+6.784510375 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/dc1416cc-736d-4eab-b95d-e963572b78e3-config-volume") pod "coredns-76f75df574-776ph" (UID: "dc1416cc-736d-4eab-b95d-e963572b78e3") : object "kube-system"/"coredns" not registered
	I0328 01:33:35.838332    6044 command_runner.go:130] > Mar 28 01:32:19 multinode-240000 kubelet[1533]: I0328 01:32:19.727738    1533 kubelet.go:1908] "Deleted mirror pod because it is outdated" pod="kube-system/kube-apiserver-multinode-240000"
	I0328 01:33:35.838332    6044 command_runner.go:130] > Mar 28 01:32:19 multinode-240000 kubelet[1533]: I0328 01:32:19.734766    1533 status_manager.go:877] "Failed to update status for pod" pod="kube-system/etcd-multinode-240000" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c9e76e4-ed9f-4595-aa5e-ddd6e74f4e93\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"$setElementOrder/hostIPs\\\":[{\\\"ip\\\":\\\"172.28.229.19\\\"}],\\\"$setElementOrder/podIPs\\\":[{\\\"ip\\\":\\\"172.28.229.19\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2024-03-28T01:32:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2024-03-28T01:32:14Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2024-03-28T01:32:14Z\\\",\\\"message\\\":\\\"containers with unready status: [etcd]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2024-03-28T01:32:14Z\\\",\\\"message\\\":\\\"containers with unready status: [etcd]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2024-03-28T01:32:14Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"docker://ab4a76ecb029b98cd5b2c7ce34c9d81d5da9b76e6721e8e54059f840240fcb66\\\",\\\"image\\\":\\\"registry.k8s.io/etcd:3.5.12-0\\\",\\\"imageID\\\":\\\"docker-pullable://registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2024-03-28T01:32:15Z\\\"}}}],\\\"hostIP\\\":\\\"172.28.229.19\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"172.28.229.19\\\"},{\\\"$patch\\\":\\\"delete\\\",\\\"ip\\\":\\\"172.28.227.122\\\"}],\\\"podIP\\\":\\\"172.28.229.19\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"172.28.229.19\\\"},{\\\"$patch\\\":\\\"delete\\\",\\\"ip\\\":\\\"172.28.227.122\\\"}],\\\"startTime\\\":\\\"2024-03-28T01:32:14Z\\\"}}\" for pod \"kube-system\"/\"etcd-multinode-240000\": pods \"etcd-multinode-240000\" not found"
	I0328 01:33:35.838332    6044 command_runner.go:130] > Mar 28 01:32:19 multinode-240000 kubelet[1533]: I0328 01:32:19.799037    1533 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="08b85a8adf05b50d7739532a291175d4" path="/var/lib/kubelet/pods/08b85a8adf05b50d7739532a291175d4/volumes"
	I0328 01:33:35.838332    6044 command_runner.go:130] > Mar 28 01:32:19 multinode-240000 kubelet[1533]: E0328 01:32:19.799563    1533 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0328 01:33:35.838332    6044 command_runner.go:130] > Mar 28 01:32:19 multinode-240000 kubelet[1533]: E0328 01:32:19.799591    1533 projected.go:200] Error preparing data for projected volume kube-api-access-86msg for pod default/busybox-7fdf7869d9-ct428: object "default"/"kube-root-ca.crt" not registered
	I0328 01:33:35.838332    6044 command_runner.go:130] > Mar 28 01:32:19 multinode-240000 kubelet[1533]: E0328 01:32:19.799660    1533 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/82be2bd2-ca76-4804-8e23-ebd40a434863-kube-api-access-86msg podName:82be2bd2-ca76-4804-8e23-ebd40a434863 nodeName:}" failed. No retries permitted until 2024-03-28 01:32:20.299638671 +0000 UTC m=+6.858648361 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-86msg" (UniqueName: "kubernetes.io/projected/82be2bd2-ca76-4804-8e23-ebd40a434863-kube-api-access-86msg") pod "busybox-7fdf7869d9-ct428" (UID: "82be2bd2-ca76-4804-8e23-ebd40a434863") : object "default"/"kube-root-ca.crt" not registered
	I0328 01:33:35.838942    6044 command_runner.go:130] > Mar 28 01:32:19 multinode-240000 kubelet[1533]: I0328 01:32:19.802339    1533 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3bf911dad00226d1456d6201aff35c8b" path="/var/lib/kubelet/pods/3bf911dad00226d1456d6201aff35c8b/volumes"
	I0328 01:33:35.838989    6044 command_runner.go:130] > Mar 28 01:32:19 multinode-240000 kubelet[1533]: I0328 01:32:19.949419    1533 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/etcd-multinode-240000" podStartSLOduration=0.949323047 podStartE2EDuration="949.323047ms" podCreationTimestamp="2024-03-28 01:32:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-03-28 01:32:19.919943873 +0000 UTC m=+6.478953663" watchObservedRunningTime="2024-03-28 01:32:19.949323047 +0000 UTC m=+6.508332737"
	I0328 01:33:35.838989    6044 command_runner.go:130] > Mar 28 01:32:19 multinode-240000 kubelet[1533]: I0328 01:32:19.949693    1533 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-multinode-240000" podStartSLOduration=0.949665448 podStartE2EDuration="949.665448ms" podCreationTimestamp="2024-03-28 01:32:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-03-28 01:32:19.941427427 +0000 UTC m=+6.500437217" watchObservedRunningTime="2024-03-28 01:32:19.949665448 +0000 UTC m=+6.508675138"
	I0328 01:33:35.838989    6044 command_runner.go:130] > Mar 28 01:32:20 multinode-240000 kubelet[1533]: E0328 01:32:20.230868    1533 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0328 01:33:35.838989    6044 command_runner.go:130] > Mar 28 01:32:20 multinode-240000 kubelet[1533]: E0328 01:32:20.231013    1533 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/dc1416cc-736d-4eab-b95d-e963572b78e3-config-volume podName:dc1416cc-736d-4eab-b95d-e963572b78e3 nodeName:}" failed. No retries permitted until 2024-03-28 01:32:21.230991954 +0000 UTC m=+7.790001744 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/dc1416cc-736d-4eab-b95d-e963572b78e3-config-volume") pod "coredns-76f75df574-776ph" (UID: "dc1416cc-736d-4eab-b95d-e963572b78e3") : object "kube-system"/"coredns" not registered
	I0328 01:33:35.838989    6044 command_runner.go:130] > Mar 28 01:32:20 multinode-240000 kubelet[1533]: E0328 01:32:20.331172    1533 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0328 01:33:35.838989    6044 command_runner.go:130] > Mar 28 01:32:20 multinode-240000 kubelet[1533]: E0328 01:32:20.331223    1533 projected.go:200] Error preparing data for projected volume kube-api-access-86msg for pod default/busybox-7fdf7869d9-ct428: object "default"/"kube-root-ca.crt" not registered
	I0328 01:33:35.838989    6044 command_runner.go:130] > Mar 28 01:32:20 multinode-240000 kubelet[1533]: E0328 01:32:20.331292    1533 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/82be2bd2-ca76-4804-8e23-ebd40a434863-kube-api-access-86msg podName:82be2bd2-ca76-4804-8e23-ebd40a434863 nodeName:}" failed. No retries permitted until 2024-03-28 01:32:21.331274305 +0000 UTC m=+7.890283995 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-86msg" (UniqueName: "kubernetes.io/projected/82be2bd2-ca76-4804-8e23-ebd40a434863-kube-api-access-86msg") pod "busybox-7fdf7869d9-ct428" (UID: "82be2bd2-ca76-4804-8e23-ebd40a434863") : object "default"/"kube-root-ca.crt" not registered
	I0328 01:33:35.838989    6044 command_runner.go:130] > Mar 28 01:32:20 multinode-240000 kubelet[1533]: I0328 01:32:20.880883    1533 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="821d3cf9ae1a9ffce2f350e9ee239e00fd8743eb338fae8a5b39734fc9cabf5e"
	I0328 01:33:35.838989    6044 command_runner.go:130] > Mar 28 01:32:20 multinode-240000 kubelet[1533]: I0328 01:32:20.905234    1533 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dfd01cb54b7d89aef97b057d7578bb34d4f58b0e2c9aacddeeff9fbb19db3cb6"
	I0328 01:33:35.838989    6044 command_runner.go:130] > Mar 28 01:32:21 multinode-240000 kubelet[1533]: E0328 01:32:21.238101    1533 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0328 01:33:35.838989    6044 command_runner.go:130] > Mar 28 01:32:21 multinode-240000 kubelet[1533]: E0328 01:32:21.238271    1533 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/dc1416cc-736d-4eab-b95d-e963572b78e3-config-volume podName:dc1416cc-736d-4eab-b95d-e963572b78e3 nodeName:}" failed. No retries permitted until 2024-03-28 01:32:23.238201582 +0000 UTC m=+9.797211372 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/dc1416cc-736d-4eab-b95d-e963572b78e3-config-volume") pod "coredns-76f75df574-776ph" (UID: "dc1416cc-736d-4eab-b95d-e963572b78e3") : object "kube-system"/"coredns" not registered
	I0328 01:33:35.838989    6044 command_runner.go:130] > Mar 28 01:32:21 multinode-240000 kubelet[1533]: I0328 01:32:21.272138    1533 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="347f7ad7ebaed8796c8b12cf936e661c605c1c7a9dc02ccb15b4c682a96c1058"
	I0328 01:33:35.838989    6044 command_runner.go:130] > Mar 28 01:32:21 multinode-240000 kubelet[1533]: E0328 01:32:21.338941    1533 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0328 01:33:35.838989    6044 command_runner.go:130] > Mar 28 01:32:21 multinode-240000 kubelet[1533]: E0328 01:32:21.338996    1533 projected.go:200] Error preparing data for projected volume kube-api-access-86msg for pod default/busybox-7fdf7869d9-ct428: object "default"/"kube-root-ca.crt" not registered
	I0328 01:33:35.839583    6044 command_runner.go:130] > Mar 28 01:32:21 multinode-240000 kubelet[1533]: E0328 01:32:21.339062    1533 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/82be2bd2-ca76-4804-8e23-ebd40a434863-kube-api-access-86msg podName:82be2bd2-ca76-4804-8e23-ebd40a434863 nodeName:}" failed. No retries permitted until 2024-03-28 01:32:23.339043635 +0000 UTC m=+9.898053325 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-86msg" (UniqueName: "kubernetes.io/projected/82be2bd2-ca76-4804-8e23-ebd40a434863-kube-api-access-86msg") pod "busybox-7fdf7869d9-ct428" (UID: "82be2bd2-ca76-4804-8e23-ebd40a434863") : object "default"/"kube-root-ca.crt" not registered
	I0328 01:33:35.839583    6044 command_runner.go:130] > Mar 28 01:32:21 multinode-240000 kubelet[1533]: E0328 01:32:21.791679    1533 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-76f75df574-776ph" podUID="dc1416cc-736d-4eab-b95d-e963572b78e3"
	I0328 01:33:35.839583    6044 command_runner.go:130] > Mar 28 01:32:21 multinode-240000 kubelet[1533]: E0328 01:32:21.792217    1533 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7fdf7869d9-ct428" podUID="82be2bd2-ca76-4804-8e23-ebd40a434863"
	I0328 01:33:35.839583    6044 command_runner.go:130] > Mar 28 01:32:23 multinode-240000 kubelet[1533]: E0328 01:32:23.261654    1533 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0328 01:33:35.839771    6044 command_runner.go:130] > Mar 28 01:32:23 multinode-240000 kubelet[1533]: E0328 01:32:23.261858    1533 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/dc1416cc-736d-4eab-b95d-e963572b78e3-config-volume podName:dc1416cc-736d-4eab-b95d-e963572b78e3 nodeName:}" failed. No retries permitted until 2024-03-28 01:32:27.261834961 +0000 UTC m=+13.820844751 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/dc1416cc-736d-4eab-b95d-e963572b78e3-config-volume") pod "coredns-76f75df574-776ph" (UID: "dc1416cc-736d-4eab-b95d-e963572b78e3") : object "kube-system"/"coredns" not registered
	I0328 01:33:35.839771    6044 command_runner.go:130] > Mar 28 01:32:23 multinode-240000 kubelet[1533]: E0328 01:32:23.362225    1533 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0328 01:33:35.839855    6044 command_runner.go:130] > Mar 28 01:32:23 multinode-240000 kubelet[1533]: E0328 01:32:23.362265    1533 projected.go:200] Error preparing data for projected volume kube-api-access-86msg for pod default/busybox-7fdf7869d9-ct428: object "default"/"kube-root-ca.crt" not registered
	I0328 01:33:35.839855    6044 command_runner.go:130] > Mar 28 01:32:23 multinode-240000 kubelet[1533]: E0328 01:32:23.362325    1533 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/82be2bd2-ca76-4804-8e23-ebd40a434863-kube-api-access-86msg podName:82be2bd2-ca76-4804-8e23-ebd40a434863 nodeName:}" failed. No retries permitted until 2024-03-28 01:32:27.362305413 +0000 UTC m=+13.921315103 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-86msg" (UniqueName: "kubernetes.io/projected/82be2bd2-ca76-4804-8e23-ebd40a434863-kube-api-access-86msg") pod "busybox-7fdf7869d9-ct428" (UID: "82be2bd2-ca76-4804-8e23-ebd40a434863") : object "default"/"kube-root-ca.crt" not registered
	I0328 01:33:35.839934    6044 command_runner.go:130] > Mar 28 01:32:23 multinode-240000 kubelet[1533]: E0328 01:32:23.790396    1533 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-76f75df574-776ph" podUID="dc1416cc-736d-4eab-b95d-e963572b78e3"
	I0328 01:33:35.840013    6044 command_runner.go:130] > Mar 28 01:32:23 multinode-240000 kubelet[1533]: E0328 01:32:23.790902    1533 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7fdf7869d9-ct428" podUID="82be2bd2-ca76-4804-8e23-ebd40a434863"
	I0328 01:33:35.840013    6044 command_runner.go:130] > Mar 28 01:32:25 multinode-240000 kubelet[1533]: E0328 01:32:25.790044    1533 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7fdf7869d9-ct428" podUID="82be2bd2-ca76-4804-8e23-ebd40a434863"
	I0328 01:33:35.840091    6044 command_runner.go:130] > Mar 28 01:32:25 multinode-240000 kubelet[1533]: E0328 01:32:25.790562    1533 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-76f75df574-776ph" podUID="dc1416cc-736d-4eab-b95d-e963572b78e3"
	I0328 01:33:35.840091    6044 command_runner.go:130] > Mar 28 01:32:27 multinode-240000 kubelet[1533]: E0328 01:32:27.292215    1533 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0328 01:33:35.840199    6044 command_runner.go:130] > Mar 28 01:32:27 multinode-240000 kubelet[1533]: E0328 01:32:27.292399    1533 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/dc1416cc-736d-4eab-b95d-e963572b78e3-config-volume podName:dc1416cc-736d-4eab-b95d-e963572b78e3 nodeName:}" failed. No retries permitted until 2024-03-28 01:32:35.292355671 +0000 UTC m=+21.851365461 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/dc1416cc-736d-4eab-b95d-e963572b78e3-config-volume") pod "coredns-76f75df574-776ph" (UID: "dc1416cc-736d-4eab-b95d-e963572b78e3") : object "kube-system"/"coredns" not registered
	I0328 01:33:35.840199    6044 command_runner.go:130] > Mar 28 01:32:27 multinode-240000 kubelet[1533]: E0328 01:32:27.393085    1533 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0328 01:33:35.840289    6044 command_runner.go:130] > Mar 28 01:32:27 multinode-240000 kubelet[1533]: E0328 01:32:27.393207    1533 projected.go:200] Error preparing data for projected volume kube-api-access-86msg for pod default/busybox-7fdf7869d9-ct428: object "default"/"kube-root-ca.crt" not registered
	I0328 01:33:35.840363    6044 command_runner.go:130] > Mar 28 01:32:27 multinode-240000 kubelet[1533]: E0328 01:32:27.393270    1533 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/82be2bd2-ca76-4804-8e23-ebd40a434863-kube-api-access-86msg podName:82be2bd2-ca76-4804-8e23-ebd40a434863 nodeName:}" failed. No retries permitted until 2024-03-28 01:32:35.393251521 +0000 UTC m=+21.952261211 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-86msg" (UniqueName: "kubernetes.io/projected/82be2bd2-ca76-4804-8e23-ebd40a434863-kube-api-access-86msg") pod "busybox-7fdf7869d9-ct428" (UID: "82be2bd2-ca76-4804-8e23-ebd40a434863") : object "default"/"kube-root-ca.crt" not registered
	I0328 01:33:35.840363    6044 command_runner.go:130] > Mar 28 01:32:27 multinode-240000 kubelet[1533]: E0328 01:32:27.791559    1533 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7fdf7869d9-ct428" podUID="82be2bd2-ca76-4804-8e23-ebd40a434863"
	I0328 01:33:35.840456    6044 command_runner.go:130] > Mar 28 01:32:27 multinode-240000 kubelet[1533]: E0328 01:32:27.792839    1533 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-76f75df574-776ph" podUID="dc1416cc-736d-4eab-b95d-e963572b78e3"
	I0328 01:33:35.840565    6044 command_runner.go:130] > Mar 28 01:32:29 multinode-240000 kubelet[1533]: E0328 01:32:29.790087    1533 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-76f75df574-776ph" podUID="dc1416cc-736d-4eab-b95d-e963572b78e3"
	I0328 01:33:35.840565    6044 command_runner.go:130] > Mar 28 01:32:29 multinode-240000 kubelet[1533]: E0328 01:32:29.793138    1533 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7fdf7869d9-ct428" podUID="82be2bd2-ca76-4804-8e23-ebd40a434863"
	I0328 01:33:35.840643    6044 command_runner.go:130] > Mar 28 01:32:31 multinode-240000 kubelet[1533]: E0328 01:32:31.791578    1533 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7fdf7869d9-ct428" podUID="82be2bd2-ca76-4804-8e23-ebd40a434863"
	I0328 01:33:35.840643    6044 command_runner.go:130] > Mar 28 01:32:31 multinode-240000 kubelet[1533]: E0328 01:32:31.792402    1533 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-76f75df574-776ph" podUID="dc1416cc-736d-4eab-b95d-e963572b78e3"
	I0328 01:33:35.840851    6044 command_runner.go:130] > Mar 28 01:32:33 multinode-240000 kubelet[1533]: E0328 01:32:33.789342    1533 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7fdf7869d9-ct428" podUID="82be2bd2-ca76-4804-8e23-ebd40a434863"
	I0328 01:33:35.840851    6044 command_runner.go:130] > Mar 28 01:32:33 multinode-240000 kubelet[1533]: E0328 01:32:33.790306    1533 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-76f75df574-776ph" podUID="dc1416cc-736d-4eab-b95d-e963572b78e3"
	I0328 01:33:35.840851    6044 command_runner.go:130] > Mar 28 01:32:35 multinode-240000 kubelet[1533]: E0328 01:32:35.358933    1533 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0328 01:33:35.840851    6044 command_runner.go:130] > Mar 28 01:32:35 multinode-240000 kubelet[1533]: E0328 01:32:35.359250    1533 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/dc1416cc-736d-4eab-b95d-e963572b78e3-config-volume podName:dc1416cc-736d-4eab-b95d-e963572b78e3 nodeName:}" failed. No retries permitted until 2024-03-28 01:32:51.359180546 +0000 UTC m=+37.918190236 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/dc1416cc-736d-4eab-b95d-e963572b78e3-config-volume") pod "coredns-76f75df574-776ph" (UID: "dc1416cc-736d-4eab-b95d-e963572b78e3") : object "kube-system"/"coredns" not registered
	I0328 01:33:35.841431    6044 command_runner.go:130] > Mar 28 01:32:35 multinode-240000 kubelet[1533]: E0328 01:32:35.460013    1533 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0328 01:33:35.841431    6044 command_runner.go:130] > Mar 28 01:32:35 multinode-240000 kubelet[1533]: E0328 01:32:35.460054    1533 projected.go:200] Error preparing data for projected volume kube-api-access-86msg for pod default/busybox-7fdf7869d9-ct428: object "default"/"kube-root-ca.crt" not registered
	I0328 01:33:35.841431    6044 command_runner.go:130] > Mar 28 01:32:35 multinode-240000 kubelet[1533]: E0328 01:32:35.460129    1533 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/82be2bd2-ca76-4804-8e23-ebd40a434863-kube-api-access-86msg podName:82be2bd2-ca76-4804-8e23-ebd40a434863 nodeName:}" failed. No retries permitted until 2024-03-28 01:32:51.460096057 +0000 UTC m=+38.019105747 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-86msg" (UniqueName: "kubernetes.io/projected/82be2bd2-ca76-4804-8e23-ebd40a434863-kube-api-access-86msg") pod "busybox-7fdf7869d9-ct428" (UID: "82be2bd2-ca76-4804-8e23-ebd40a434863") : object "default"/"kube-root-ca.crt" not registered
	I0328 01:33:35.841568    6044 command_runner.go:130] > Mar 28 01:32:35 multinode-240000 kubelet[1533]: E0328 01:32:35.790050    1533 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7fdf7869d9-ct428" podUID="82be2bd2-ca76-4804-8e23-ebd40a434863"
	I0328 01:33:35.841602    6044 command_runner.go:130] > Mar 28 01:32:35 multinode-240000 kubelet[1533]: E0328 01:32:35.792176    1533 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-76f75df574-776ph" podUID="dc1416cc-736d-4eab-b95d-e963572b78e3"
	I0328 01:33:35.841602    6044 command_runner.go:130] > Mar 28 01:32:37 multinode-240000 kubelet[1533]: E0328 01:32:37.791217    1533 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-76f75df574-776ph" podUID="dc1416cc-736d-4eab-b95d-e963572b78e3"
	I0328 01:33:35.841602    6044 command_runner.go:130] > Mar 28 01:32:37 multinode-240000 kubelet[1533]: E0328 01:32:37.792228    1533 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7fdf7869d9-ct428" podUID="82be2bd2-ca76-4804-8e23-ebd40a434863"
	I0328 01:33:35.841602    6044 command_runner.go:130] > Mar 28 01:32:39 multinode-240000 kubelet[1533]: E0328 01:32:39.789082    1533 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7fdf7869d9-ct428" podUID="82be2bd2-ca76-4804-8e23-ebd40a434863"
	I0328 01:33:35.841602    6044 command_runner.go:130] > Mar 28 01:32:39 multinode-240000 kubelet[1533]: E0328 01:32:39.789888    1533 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-76f75df574-776ph" podUID="dc1416cc-736d-4eab-b95d-e963572b78e3"
	I0328 01:33:35.841602    6044 command_runner.go:130] > Mar 28 01:32:41 multinode-240000 kubelet[1533]: E0328 01:32:41.789933    1533 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7fdf7869d9-ct428" podUID="82be2bd2-ca76-4804-8e23-ebd40a434863"
	I0328 01:33:35.841602    6044 command_runner.go:130] > Mar 28 01:32:41 multinode-240000 kubelet[1533]: E0328 01:32:41.790703    1533 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-76f75df574-776ph" podUID="dc1416cc-736d-4eab-b95d-e963572b78e3"
	I0328 01:33:35.841602    6044 command_runner.go:130] > Mar 28 01:32:43 multinode-240000 kubelet[1533]: E0328 01:32:43.789453    1533 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7fdf7869d9-ct428" podUID="82be2bd2-ca76-4804-8e23-ebd40a434863"
	I0328 01:33:35.841602    6044 command_runner.go:130] > Mar 28 01:32:43 multinode-240000 kubelet[1533]: E0328 01:32:43.790318    1533 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-76f75df574-776ph" podUID="dc1416cc-736d-4eab-b95d-e963572b78e3"
	I0328 01:33:35.841602    6044 command_runner.go:130] > Mar 28 01:32:45 multinode-240000 kubelet[1533]: E0328 01:32:45.789795    1533 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-76f75df574-776ph" podUID="dc1416cc-736d-4eab-b95d-e963572b78e3"
	I0328 01:33:35.841602    6044 command_runner.go:130] > Mar 28 01:32:45 multinode-240000 kubelet[1533]: E0328 01:32:45.790497    1533 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7fdf7869d9-ct428" podUID="82be2bd2-ca76-4804-8e23-ebd40a434863"
	I0328 01:33:35.841602    6044 command_runner.go:130] > Mar 28 01:32:47 multinode-240000 kubelet[1533]: E0328 01:32:47.789306    1533 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7fdf7869d9-ct428" podUID="82be2bd2-ca76-4804-8e23-ebd40a434863"
	I0328 01:33:35.841602    6044 command_runner.go:130] > Mar 28 01:32:47 multinode-240000 kubelet[1533]: E0328 01:32:47.790760    1533 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-76f75df574-776ph" podUID="dc1416cc-736d-4eab-b95d-e963572b78e3"
	I0328 01:33:35.841602    6044 command_runner.go:130] > Mar 28 01:32:49 multinode-240000 kubelet[1533]: E0328 01:32:49.790669    1533 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-76f75df574-776ph" podUID="dc1416cc-736d-4eab-b95d-e963572b78e3"
	I0328 01:33:35.841602    6044 command_runner.go:130] > Mar 28 01:32:49 multinode-240000 kubelet[1533]: E0328 01:32:49.800302    1533 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7fdf7869d9-ct428" podUID="82be2bd2-ca76-4804-8e23-ebd40a434863"
	I0328 01:33:35.841602    6044 command_runner.go:130] > Mar 28 01:32:51 multinode-240000 kubelet[1533]: E0328 01:32:51.398046    1533 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0328 01:33:35.842181    6044 command_runner.go:130] > Mar 28 01:32:51 multinode-240000 kubelet[1533]: E0328 01:32:51.399557    1533 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/dc1416cc-736d-4eab-b95d-e963572b78e3-config-volume podName:dc1416cc-736d-4eab-b95d-e963572b78e3 nodeName:}" failed. No retries permitted until 2024-03-28 01:33:23.399534782 +0000 UTC m=+69.958544472 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/dc1416cc-736d-4eab-b95d-e963572b78e3-config-volume") pod "coredns-76f75df574-776ph" (UID: "dc1416cc-736d-4eab-b95d-e963572b78e3") : object "kube-system"/"coredns" not registered
	I0328 01:33:35.842332    6044 command_runner.go:130] > Mar 28 01:32:51 multinode-240000 kubelet[1533]: E0328 01:32:51.499389    1533 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0328 01:33:35.842366    6044 command_runner.go:130] > Mar 28 01:32:51 multinode-240000 kubelet[1533]: E0328 01:32:51.499479    1533 projected.go:200] Error preparing data for projected volume kube-api-access-86msg for pod default/busybox-7fdf7869d9-ct428: object "default"/"kube-root-ca.crt" not registered
	I0328 01:33:35.842366    6044 command_runner.go:130] > Mar 28 01:32:51 multinode-240000 kubelet[1533]: E0328 01:32:51.499555    1533 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/82be2bd2-ca76-4804-8e23-ebd40a434863-kube-api-access-86msg podName:82be2bd2-ca76-4804-8e23-ebd40a434863 nodeName:}" failed. No retries permitted until 2024-03-28 01:33:23.499533548 +0000 UTC m=+70.058543238 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-86msg" (UniqueName: "kubernetes.io/projected/82be2bd2-ca76-4804-8e23-ebd40a434863-kube-api-access-86msg") pod "busybox-7fdf7869d9-ct428" (UID: "82be2bd2-ca76-4804-8e23-ebd40a434863") : object "default"/"kube-root-ca.crt" not registered
	I0328 01:33:35.842366    6044 command_runner.go:130] > Mar 28 01:32:51 multinode-240000 kubelet[1533]: E0328 01:32:51.789982    1533 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7fdf7869d9-ct428" podUID="82be2bd2-ca76-4804-8e23-ebd40a434863"
	I0328 01:33:35.842366    6044 command_runner.go:130] > Mar 28 01:32:51 multinode-240000 kubelet[1533]: E0328 01:32:51.790491    1533 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-76f75df574-776ph" podUID="dc1416cc-736d-4eab-b95d-e963572b78e3"
	I0328 01:33:35.842366    6044 command_runner.go:130] > Mar 28 01:32:52 multinode-240000 kubelet[1533]: I0328 01:32:52.819055    1533 scope.go:117] "RemoveContainer" containerID="d02996b2d57bf7439b634e180f3f28e83a0825e92695a9ca17ecca77cbb5da1c"
	I0328 01:33:35.842366    6044 command_runner.go:130] > Mar 28 01:32:52 multinode-240000 kubelet[1533]: I0328 01:32:52.819508    1533 scope.go:117] "RemoveContainer" containerID="4dcf03394ea80c2d0427ea263be7c533ab3aaf86ba89e254db95cbdd1b70a343"
	I0328 01:33:35.842366    6044 command_runner.go:130] > Mar 28 01:32:52 multinode-240000 kubelet[1533]: E0328 01:32:52.820004    1533 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(3f881c2f-5b9a-4dc5-bc8c-56784b9ff60f)\"" pod="kube-system/storage-provisioner" podUID="3f881c2f-5b9a-4dc5-bc8c-56784b9ff60f"
	I0328 01:33:35.842366    6044 command_runner.go:130] > Mar 28 01:32:53 multinode-240000 kubelet[1533]: E0328 01:32:53.789452    1533 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7fdf7869d9-ct428" podUID="82be2bd2-ca76-4804-8e23-ebd40a434863"
	I0328 01:33:35.842366    6044 command_runner.go:130] > Mar 28 01:32:53 multinode-240000 kubelet[1533]: E0328 01:32:53.791042    1533 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-76f75df574-776ph" podUID="dc1416cc-736d-4eab-b95d-e963572b78e3"
	I0328 01:33:35.842366    6044 command_runner.go:130] > Mar 28 01:32:53 multinode-240000 kubelet[1533]: I0328 01:32:53.945064    1533 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
	I0328 01:33:35.842366    6044 command_runner.go:130] > Mar 28 01:33:04 multinode-240000 kubelet[1533]: I0328 01:33:04.789137    1533 scope.go:117] "RemoveContainer" containerID="4dcf03394ea80c2d0427ea263be7c533ab3aaf86ba89e254db95cbdd1b70a343"
	I0328 01:33:35.842366    6044 command_runner.go:130] > Mar 28 01:33:13 multinode-240000 kubelet[1533]: I0328 01:33:13.803616    1533 scope.go:117] "RemoveContainer" containerID="66f15076d3443d3fc3179676ba45f1cbac7cf2eb673e7741a3dddae0eb5baac8"
	I0328 01:33:35.842366    6044 command_runner.go:130] > Mar 28 01:33:13 multinode-240000 kubelet[1533]: E0328 01:33:13.838374    1533 iptables.go:575] "Could not set up iptables canary" err=<
	I0328 01:33:35.842366    6044 command_runner.go:130] > Mar 28 01:33:13 multinode-240000 kubelet[1533]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	I0328 01:33:35.842366    6044 command_runner.go:130] > Mar 28 01:33:13 multinode-240000 kubelet[1533]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	I0328 01:33:35.842366    6044 command_runner.go:130] > Mar 28 01:33:13 multinode-240000 kubelet[1533]:         Perhaps ip6tables or your kernel needs to be upgraded.
	I0328 01:33:35.842366    6044 command_runner.go:130] > Mar 28 01:33:13 multinode-240000 kubelet[1533]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	I0328 01:33:35.842366    6044 command_runner.go:130] > Mar 28 01:33:13 multinode-240000 kubelet[1533]: I0328 01:33:13.850324    1533 scope.go:117] "RemoveContainer" containerID="a01212226d03a29a5f7e096880ecf627817c14801c81f452beaa1a398b97cfe3"
	I0328 01:33:35.889902    6044 logs.go:123] Gathering logs for kube-apiserver [6539c85e1b61] ...
	I0328 01:33:35.889902    6044 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6539c85e1b61"
	I0328 01:33:35.917732    6044 command_runner.go:130] ! I0328 01:32:16.440903       1 options.go:222] external host was not specified, using 172.28.229.19
	I0328 01:33:35.918668    6044 command_runner.go:130] ! I0328 01:32:16.443001       1 server.go:148] Version: v1.29.3
	I0328 01:33:35.918711    6044 command_runner.go:130] ! I0328 01:32:16.443211       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0328 01:33:35.918776    6044 command_runner.go:130] ! I0328 01:32:17.234065       1 shared_informer.go:311] Waiting for caches to sync for node_authorizer
	I0328 01:33:35.918846    6044 command_runner.go:130] ! I0328 01:32:17.251028       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0328 01:33:35.918922    6044 command_runner.go:130] ! I0328 01:32:17.252647       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0328 01:33:35.918922    6044 command_runner.go:130] ! I0328 01:32:17.253295       1 instance.go:297] Using reconciler: lease
	I0328 01:33:35.918922    6044 command_runner.go:130] ! I0328 01:32:17.488371       1 handler.go:275] Adding GroupVersion apiextensions.k8s.io v1 to ResourceManager
	I0328 01:33:35.919001    6044 command_runner.go:130] ! W0328 01:32:17.492937       1 genericapiserver.go:742] Skipping API apiextensions.k8s.io/v1beta1 because it has no resources.
	I0328 01:33:35.919001    6044 command_runner.go:130] ! I0328 01:32:17.992938       1 handler.go:275] Adding GroupVersion  v1 to ResourceManager
	I0328 01:33:35.919001    6044 command_runner.go:130] ! I0328 01:32:17.993291       1 instance.go:693] API group "internal.apiserver.k8s.io" is not enabled, skipping.
	I0328 01:33:35.919001    6044 command_runner.go:130] ! I0328 01:32:18.498808       1 instance.go:693] API group "resource.k8s.io" is not enabled, skipping.
	I0328 01:33:35.919001    6044 command_runner.go:130] ! I0328 01:32:18.513162       1 handler.go:275] Adding GroupVersion authentication.k8s.io v1 to ResourceManager
	I0328 01:33:35.919001    6044 command_runner.go:130] ! W0328 01:32:18.513265       1 genericapiserver.go:742] Skipping API authentication.k8s.io/v1beta1 because it has no resources.
	I0328 01:33:35.919001    6044 command_runner.go:130] ! W0328 01:32:18.513276       1 genericapiserver.go:742] Skipping API authentication.k8s.io/v1alpha1 because it has no resources.
	I0328 01:33:35.919001    6044 command_runner.go:130] ! I0328 01:32:18.513869       1 handler.go:275] Adding GroupVersion authorization.k8s.io v1 to ResourceManager
	I0328 01:33:35.919195    6044 command_runner.go:130] ! W0328 01:32:18.513921       1 genericapiserver.go:742] Skipping API authorization.k8s.io/v1beta1 because it has no resources.
	I0328 01:33:35.919253    6044 command_runner.go:130] ! I0328 01:32:18.515227       1 handler.go:275] Adding GroupVersion autoscaling v2 to ResourceManager
	I0328 01:33:35.919348    6044 command_runner.go:130] ! I0328 01:32:18.516586       1 handler.go:275] Adding GroupVersion autoscaling v1 to ResourceManager
	I0328 01:33:35.919391    6044 command_runner.go:130] ! W0328 01:32:18.516885       1 genericapiserver.go:742] Skipping API autoscaling/v2beta1 because it has no resources.
	I0328 01:33:35.919434    6044 command_runner.go:130] ! W0328 01:32:18.516898       1 genericapiserver.go:742] Skipping API autoscaling/v2beta2 because it has no resources.
	I0328 01:33:35.919533    6044 command_runner.go:130] ! I0328 01:32:18.519356       1 handler.go:275] Adding GroupVersion batch v1 to ResourceManager
	I0328 01:33:35.919590    6044 command_runner.go:130] ! W0328 01:32:18.519460       1 genericapiserver.go:742] Skipping API batch/v1beta1 because it has no resources.
	I0328 01:33:35.919590    6044 command_runner.go:130] ! I0328 01:32:18.520668       1 handler.go:275] Adding GroupVersion certificates.k8s.io v1 to ResourceManager
	I0328 01:33:35.919686    6044 command_runner.go:130] ! W0328 01:32:18.520820       1 genericapiserver.go:742] Skipping API certificates.k8s.io/v1beta1 because it has no resources.
	I0328 01:33:35.919686    6044 command_runner.go:130] ! W0328 01:32:18.520830       1 genericapiserver.go:742] Skipping API certificates.k8s.io/v1alpha1 because it has no resources.
	I0328 01:33:35.919686    6044 command_runner.go:130] ! I0328 01:32:18.521802       1 handler.go:275] Adding GroupVersion coordination.k8s.io v1 to ResourceManager
	I0328 01:33:35.919686    6044 command_runner.go:130] ! W0328 01:32:18.521903       1 genericapiserver.go:742] Skipping API coordination.k8s.io/v1beta1 because it has no resources.
	I0328 01:33:35.919798    6044 command_runner.go:130] ! W0328 01:32:18.521953       1 genericapiserver.go:742] Skipping API discovery.k8s.io/v1beta1 because it has no resources.
	I0328 01:33:35.919798    6044 command_runner.go:130] ! I0328 01:32:18.523269       1 handler.go:275] Adding GroupVersion discovery.k8s.io v1 to ResourceManager
	I0328 01:33:35.919798    6044 command_runner.go:130] ! I0328 01:32:18.525859       1 handler.go:275] Adding GroupVersion networking.k8s.io v1 to ResourceManager
	I0328 01:33:35.919912    6044 command_runner.go:130] ! W0328 01:32:18.525960       1 genericapiserver.go:742] Skipping API networking.k8s.io/v1beta1 because it has no resources.
	I0328 01:33:35.919946    6044 command_runner.go:130] ! W0328 01:32:18.525970       1 genericapiserver.go:742] Skipping API networking.k8s.io/v1alpha1 because it has no resources.
	I0328 01:33:35.919969    6044 command_runner.go:130] ! I0328 01:32:18.526646       1 handler.go:275] Adding GroupVersion node.k8s.io v1 to ResourceManager
	I0328 01:33:35.920017    6044 command_runner.go:130] ! W0328 01:32:18.526842       1 genericapiserver.go:742] Skipping API node.k8s.io/v1beta1 because it has no resources.
	I0328 01:33:35.920050    6044 command_runner.go:130] ! W0328 01:32:18.526857       1 genericapiserver.go:742] Skipping API node.k8s.io/v1alpha1 because it has no resources.
	I0328 01:33:35.920072    6044 command_runner.go:130] ! I0328 01:32:18.527970       1 handler.go:275] Adding GroupVersion policy v1 to ResourceManager
	I0328 01:33:35.920072    6044 command_runner.go:130] ! W0328 01:32:18.528080       1 genericapiserver.go:742] Skipping API policy/v1beta1 because it has no resources.
	I0328 01:33:35.920072    6044 command_runner.go:130] ! I0328 01:32:18.530546       1 handler.go:275] Adding GroupVersion rbac.authorization.k8s.io v1 to ResourceManager
	I0328 01:33:35.920072    6044 command_runner.go:130] ! W0328 01:32:18.530652       1 genericapiserver.go:742] Skipping API rbac.authorization.k8s.io/v1beta1 because it has no resources.
	I0328 01:33:35.920072    6044 command_runner.go:130] ! W0328 01:32:18.530663       1 genericapiserver.go:742] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
	I0328 01:33:35.920154    6044 command_runner.go:130] ! I0328 01:32:18.531469       1 handler.go:275] Adding GroupVersion scheduling.k8s.io v1 to ResourceManager
	I0328 01:33:35.920154    6044 command_runner.go:130] ! W0328 01:32:18.531576       1 genericapiserver.go:742] Skipping API scheduling.k8s.io/v1beta1 because it has no resources.
	I0328 01:33:35.920200    6044 command_runner.go:130] ! W0328 01:32:18.531586       1 genericapiserver.go:742] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
	I0328 01:33:35.920200    6044 command_runner.go:130] ! I0328 01:32:18.534848       1 handler.go:275] Adding GroupVersion storage.k8s.io v1 to ResourceManager
	I0328 01:33:35.920200    6044 command_runner.go:130] ! W0328 01:32:18.534946       1 genericapiserver.go:742] Skipping API storage.k8s.io/v1beta1 because it has no resources.
	I0328 01:33:35.920200    6044 command_runner.go:130] ! W0328 01:32:18.534974       1 genericapiserver.go:742] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
	I0328 01:33:35.920200    6044 command_runner.go:130] ! I0328 01:32:18.537355       1 handler.go:275] Adding GroupVersion flowcontrol.apiserver.k8s.io v1beta3 to ResourceManager
	I0328 01:33:35.920267    6044 command_runner.go:130] ! I0328 01:32:18.539242       1 handler.go:275] Adding GroupVersion flowcontrol.apiserver.k8s.io v1 to ResourceManager
	I0328 01:33:35.920313    6044 command_runner.go:130] ! W0328 01:32:18.539354       1 genericapiserver.go:742] Skipping API flowcontrol.apiserver.k8s.io/v1beta2 because it has no resources.
	I0328 01:33:35.920351    6044 command_runner.go:130] ! W0328 01:32:18.539387       1 genericapiserver.go:742] Skipping API flowcontrol.apiserver.k8s.io/v1beta1 because it has no resources.
	I0328 01:33:35.920408    6044 command_runner.go:130] ! I0328 01:32:18.545662       1 handler.go:275] Adding GroupVersion apps v1 to ResourceManager
	I0328 01:33:35.920542    6044 command_runner.go:130] ! W0328 01:32:18.545825       1 genericapiserver.go:742] Skipping API apps/v1beta2 because it has no resources.
	I0328 01:33:35.920628    6044 command_runner.go:130] ! W0328 01:32:18.545834       1 genericapiserver.go:742] Skipping API apps/v1beta1 because it has no resources.
	I0328 01:33:35.920663    6044 command_runner.go:130] ! I0328 01:32:18.547229       1 handler.go:275] Adding GroupVersion admissionregistration.k8s.io v1 to ResourceManager
	I0328 01:33:35.920663    6044 command_runner.go:130] ! W0328 01:32:18.547341       1 genericapiserver.go:742] Skipping API admissionregistration.k8s.io/v1beta1 because it has no resources.
	I0328 01:33:35.920663    6044 command_runner.go:130] ! W0328 01:32:18.547350       1 genericapiserver.go:742] Skipping API admissionregistration.k8s.io/v1alpha1 because it has no resources.
	I0328 01:33:35.920663    6044 command_runner.go:130] ! I0328 01:32:18.548292       1 handler.go:275] Adding GroupVersion events.k8s.io v1 to ResourceManager
	I0328 01:33:35.920663    6044 command_runner.go:130] ! W0328 01:32:18.548390       1 genericapiserver.go:742] Skipping API events.k8s.io/v1beta1 because it has no resources.
	I0328 01:33:35.920663    6044 command_runner.go:130] ! I0328 01:32:18.574598       1 handler.go:275] Adding GroupVersion apiregistration.k8s.io v1 to ResourceManager
	I0328 01:33:35.920663    6044 command_runner.go:130] ! W0328 01:32:18.574814       1 genericapiserver.go:742] Skipping API apiregistration.k8s.io/v1beta1 because it has no resources.
	I0328 01:33:35.920663    6044 command_runner.go:130] ! I0328 01:32:19.274952       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0328 01:33:35.920663    6044 command_runner.go:130] ! I0328 01:32:19.275081       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0328 01:33:35.920663    6044 command_runner.go:130] ! I0328 01:32:19.275445       1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key"
	I0328 01:33:35.920663    6044 command_runner.go:130] ! I0328 01:32:19.275546       1 secure_serving.go:213] Serving securely on [::]:8443
	I0328 01:33:35.920663    6044 command_runner.go:130] ! I0328 01:32:19.275631       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0328 01:33:35.920663    6044 command_runner.go:130] ! I0328 01:32:19.276130       1 controller.go:80] Starting OpenAPI V3 AggregationController
	I0328 01:33:35.920663    6044 command_runner.go:130] ! I0328 01:32:19.279110       1 available_controller.go:423] Starting AvailableConditionController
	I0328 01:33:35.920663    6044 command_runner.go:130] ! I0328 01:32:19.280530       1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
	I0328 01:33:35.920663    6044 command_runner.go:130] ! I0328 01:32:19.289454       1 controller.go:116] Starting legacy_token_tracking_controller
	I0328 01:33:35.920663    6044 command_runner.go:130] ! I0328 01:32:19.289554       1 shared_informer.go:311] Waiting for caches to sync for configmaps
	I0328 01:33:35.920663    6044 command_runner.go:130] ! I0328 01:32:19.289661       1 aggregator.go:163] waiting for initial CRD sync...
	I0328 01:33:35.920663    6044 command_runner.go:130] ! I0328 01:32:19.291196       1 gc_controller.go:78] Starting apiserver lease garbage collector
	I0328 01:33:35.920663    6044 command_runner.go:130] ! I0328 01:32:19.291542       1 dynamic_serving_content.go:132] "Starting controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key"
	I0328 01:33:35.920663    6044 command_runner.go:130] ! I0328 01:32:19.292314       1 apiservice_controller.go:97] Starting APIServiceRegistrationController
	I0328 01:33:35.920663    6044 command_runner.go:130] ! I0328 01:32:19.292353       1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
	I0328 01:33:35.920663    6044 command_runner.go:130] ! I0328 01:32:19.292376       1 controller.go:78] Starting OpenAPI AggregationController
	I0328 01:33:35.920663    6044 command_runner.go:130] ! I0328 01:32:19.293395       1 system_namespaces_controller.go:67] Starting system namespaces controller
	I0328 01:33:35.920663    6044 command_runner.go:130] ! I0328 01:32:19.293575       1 customresource_discovery_controller.go:289] Starting DiscoveryController
	I0328 01:33:35.920663    6044 command_runner.go:130] ! I0328 01:32:19.279263       1 handler_discovery.go:412] Starting ResourceDiscoveryManager
	I0328 01:33:35.920663    6044 command_runner.go:130] ! I0328 01:32:19.301011       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0328 01:33:35.920663    6044 command_runner.go:130] ! I0328 01:32:19.301029       1 shared_informer.go:311] Waiting for caches to sync for crd-autoregister
	I0328 01:33:35.920663    6044 command_runner.go:130] ! I0328 01:32:19.304174       1 controller.go:133] Starting OpenAPI controller
	I0328 01:33:35.920663    6044 command_runner.go:130] ! I0328 01:32:19.304213       1 controller.go:85] Starting OpenAPI V3 controller
	I0328 01:33:35.920663    6044 command_runner.go:130] ! I0328 01:32:19.306745       1 naming_controller.go:291] Starting NamingConditionController
	I0328 01:33:35.920663    6044 command_runner.go:130] ! I0328 01:32:19.306779       1 establishing_controller.go:76] Starting EstablishingController
	I0328 01:33:35.920663    6044 command_runner.go:130] ! I0328 01:32:19.306794       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0328 01:33:35.920663    6044 command_runner.go:130] ! I0328 01:32:19.306807       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0328 01:33:35.920663    6044 command_runner.go:130] ! I0328 01:32:19.306818       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0328 01:33:35.921221    6044 command_runner.go:130] ! I0328 01:32:19.279295       1 apf_controller.go:374] Starting API Priority and Fairness config controller
	I0328 01:33:35.921221    6044 command_runner.go:130] ! I0328 01:32:19.279442       1 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
	I0328 01:33:35.921273    6044 command_runner.go:130] ! I0328 01:32:19.312069       1 shared_informer.go:311] Waiting for caches to sync for cluster_authentication_trust_controller
	I0328 01:33:35.921273    6044 command_runner.go:130] ! I0328 01:32:19.334928       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0328 01:33:35.921273    6044 command_runner.go:130] ! I0328 01:32:19.335653       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0328 01:33:35.921335    6044 command_runner.go:130] ! I0328 01:32:19.499336       1 shared_informer.go:318] Caches are synced for configmaps
	I0328 01:33:35.921335    6044 command_runner.go:130] ! I0328 01:32:19.501912       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0328 01:33:35.921374    6044 command_runner.go:130] ! I0328 01:32:19.504433       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0328 01:33:35.921374    6044 command_runner.go:130] ! I0328 01:32:19.506496       1 aggregator.go:165] initial CRD sync complete...
	I0328 01:33:35.921404    6044 command_runner.go:130] ! I0328 01:32:19.506538       1 autoregister_controller.go:141] Starting autoregister controller
	I0328 01:33:35.921404    6044 command_runner.go:130] ! I0328 01:32:19.506548       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0328 01:33:35.921404    6044 command_runner.go:130] ! I0328 01:32:19.506871       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0328 01:33:35.921404    6044 command_runner.go:130] ! I0328 01:32:19.506977       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0328 01:33:35.921453    6044 command_runner.go:130] ! I0328 01:32:19.519086       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0328 01:33:35.921492    6044 command_runner.go:130] ! I0328 01:32:19.542058       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0328 01:33:35.921492    6044 command_runner.go:130] ! I0328 01:32:19.580921       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0328 01:33:35.921543    6044 command_runner.go:130] ! I0328 01:32:19.592848       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0328 01:33:35.921543    6044 command_runner.go:130] ! I0328 01:32:19.608262       1 cache.go:39] Caches are synced for autoregister controller
	I0328 01:33:35.921543    6044 command_runner.go:130] ! I0328 01:32:20.302603       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0328 01:33:35.921582    6044 command_runner.go:130] ! W0328 01:32:20.857698       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.28.227.122 172.28.229.19]
	I0328 01:33:35.921582    6044 command_runner.go:130] ! I0328 01:32:20.859624       1 controller.go:624] quota admission added evaluator for: endpoints
	I0328 01:33:35.921582    6044 command_runner.go:130] ! I0328 01:32:20.870212       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0328 01:33:35.921629    6044 command_runner.go:130] ! I0328 01:32:22.795650       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0328 01:33:35.921629    6044 command_runner.go:130] ! I0328 01:32:23.151124       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0328 01:33:35.921629    6044 command_runner.go:130] ! I0328 01:32:23.177645       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0328 01:33:35.921660    6044 command_runner.go:130] ! I0328 01:32:23.338313       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0328 01:33:35.921660    6044 command_runner.go:130] ! I0328 01:32:23.353620       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0328 01:33:35.921660    6044 command_runner.go:130] ! W0328 01:32:40.864669       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.28.229.19]
	I0328 01:33:35.935247    6044 logs.go:123] Gathering logs for etcd [ab4a76ecb029] ...
	I0328 01:33:35.935247    6044 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab4a76ecb029"
	I0328 01:33:35.973581    6044 command_runner.go:130] ! {"level":"warn","ts":"2024-03-28T01:32:15.724971Z","caller":"embed/config.go:679","msg":"Running http and grpc server on single port. This is not recommended for production."}
	I0328 01:33:35.973581    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:15.726473Z","caller":"etcdmain/etcd.go:73","msg":"Running: ","args":["etcd","--advertise-client-urls=https://172.28.229.19:2379","--cert-file=/var/lib/minikube/certs/etcd/server.crt","--client-cert-auth=true","--data-dir=/var/lib/minikube/etcd","--experimental-initial-corrupt-check=true","--experimental-watch-progress-notify-interval=5s","--initial-advertise-peer-urls=https://172.28.229.19:2380","--initial-cluster=multinode-240000=https://172.28.229.19:2380","--key-file=/var/lib/minikube/certs/etcd/server.key","--listen-client-urls=https://127.0.0.1:2379,https://172.28.229.19:2379","--listen-metrics-urls=http://127.0.0.1:2381","--listen-peer-urls=https://172.28.229.19:2380","--name=multinode-240000","--peer-cert-file=/var/lib/minikube/certs/etcd/peer.crt","--peer-client-cert-auth=true","--peer-key-file=/var/lib/minikube/certs/etcd/peer.key","--peer-trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt","--proxy-refresh-interval=70000","--snapshot-count=10000","--trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt"]}
	I0328 01:33:35.973581    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:15.727203Z","caller":"etcdmain/etcd.go:116","msg":"server has been already initialized","data-dir":"/var/lib/minikube/etcd","dir-type":"member"}
	I0328 01:33:35.973581    6044 command_runner.go:130] ! {"level":"warn","ts":"2024-03-28T01:32:15.727384Z","caller":"embed/config.go:679","msg":"Running http and grpc server on single port. This is not recommended for production."}
	I0328 01:33:35.973581    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:15.727623Z","caller":"embed/etcd.go:127","msg":"configuring peer listeners","listen-peer-urls":["https://172.28.229.19:2380"]}
	I0328 01:33:35.973581    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:15.728158Z","caller":"embed/etcd.go:494","msg":"starting with peer TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	I0328 01:33:35.973581    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:15.738374Z","caller":"embed/etcd.go:135","msg":"configuring client listeners","listen-client-urls":["https://127.0.0.1:2379","https://172.28.229.19:2379"]}
	I0328 01:33:35.974717    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:15.74108Z","caller":"embed/etcd.go:308","msg":"starting an etcd server","etcd-version":"3.5.12","git-sha":"e7b3bb6cc","go-version":"go1.20.13","go-os":"linux","go-arch":"amd64","max-cpu-set":2,"max-cpu-available":2,"member-initialized":true,"name":"multinode-240000","data-dir":"/var/lib/minikube/etcd","wal-dir":"","wal-dir-dedicated":"","member-dir":"/var/lib/minikube/etcd/member","force-new-cluster":false,"heartbeat-interval":"100ms","election-timeout":"1s","initial-election-tick-advance":true,"snapshot-count":10000,"max-wals":5,"max-snapshots":5,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["https://172.28.229.19:2380"],"listen-peer-urls":["https://172.28.229.19:2380"],"advertise-client-urls":["https://172.28.229.19:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.28.229.19:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"],"cors":["*"],"host-whitelist":["*"],"initial-cluster":"","initial-cluster-state":"new","initial-cluster-token":"","quota-backend-bytes":2147483648,"max-request-bytes":1572864,"max-concurrent-streams":4294967295,"pre-vote":true,"initial-corrupt-check":true,"corrupt-check-time-interval":"0s","compact-check-time-enabled":false,"compact-check-time-interval":"1m0s","auto-compaction-mode":"periodic","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":"","downgrade-check-interval":"5s"}
	I0328 01:33:35.974766    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:15.764546Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"21.677054ms"}
	I0328 01:33:35.974832    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:15.798451Z","caller":"etcdserver/server.go:532","msg":"No snapshot found. Recovering WAL from scratch!"}
	I0328 01:33:35.974874    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:15.829844Z","caller":"etcdserver/raft.go:530","msg":"restarting local member","cluster-id":"9d63dbc5e8f5386f","local-member-id":"8337aaa1903c5250","commit-index":2146}
	I0328 01:33:35.974936    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:15.830336Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8337aaa1903c5250 switched to configuration voters=()"}
	I0328 01:33:35.974936    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:15.830979Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8337aaa1903c5250 became follower at term 2"}
	I0328 01:33:35.974975    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:15.831279Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft 8337aaa1903c5250 [peers: [], term: 2, commit: 2146, applied: 0, lastindex: 2146, lastterm: 2]"}
	I0328 01:33:35.974975    6044 command_runner.go:130] ! {"level":"warn","ts":"2024-03-28T01:32:15.847923Z","caller":"auth/store.go:1241","msg":"simple token is not cryptographically signed"}
	I0328 01:33:35.975063    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:15.855761Z","caller":"mvcc/kvstore.go:341","msg":"restored last compact revision","meta-bucket-name":"meta","meta-bucket-name-key":"finishedCompactRev","restored-compact-revision":1393}
	I0328 01:33:35.975063    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:15.869333Z","caller":"mvcc/kvstore.go:407","msg":"kvstore restored","current-rev":1856}
	I0328 01:33:35.975112    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:15.878748Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	I0328 01:33:35.975151    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:15.88958Z","caller":"etcdserver/corrupt.go:96","msg":"starting initial corruption check","local-member-id":"8337aaa1903c5250","timeout":"7s"}
	I0328 01:33:35.975201    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:15.890509Z","caller":"etcdserver/corrupt.go:177","msg":"initial corruption checking passed; no corruption","local-member-id":"8337aaa1903c5250"}
	I0328 01:33:35.975201    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:15.890567Z","caller":"etcdserver/server.go:860","msg":"starting etcd server","local-member-id":"8337aaa1903c5250","local-server-version":"3.5.12","cluster-version":"to_be_decided"}
	I0328 01:33:35.975239    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:15.891226Z","caller":"etcdserver/server.go:760","msg":"starting initial election tick advance","election-ticks":10}
	I0328 01:33:35.975283    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:15.894393Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	I0328 01:33:35.975322    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:15.894489Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	I0328 01:33:35.975371    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:15.894506Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	I0328 01:33:35.975410    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:15.895008Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8337aaa1903c5250 switched to configuration voters=(9455213553573974608)"}
	I0328 01:33:35.975410    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:15.895115Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"9d63dbc5e8f5386f","local-member-id":"8337aaa1903c5250","added-peer-id":"8337aaa1903c5250","added-peer-peer-urls":["https://172.28.227.122:2380"]}
	I0328 01:33:35.975410    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:15.895259Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"9d63dbc5e8f5386f","local-member-id":"8337aaa1903c5250","cluster-version":"3.5"}
	I0328 01:33:35.975410    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:15.895348Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	I0328 01:33:35.975410    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:15.908515Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	I0328 01:33:35.975543    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:15.908865Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"8337aaa1903c5250","initial-advertise-peer-urls":["https://172.28.229.19:2380"],"listen-peer-urls":["https://172.28.229.19:2380"],"advertise-client-urls":["https://172.28.229.19:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.28.229.19:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	I0328 01:33:35.975589    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:15.908914Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	I0328 01:33:35.975629    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:15.908997Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"172.28.229.19:2380"}
	I0328 01:33:35.975665    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:15.909011Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"172.28.229.19:2380"}
	I0328 01:33:35.975665    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:17.232003Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8337aaa1903c5250 is starting a new election at term 2"}
	I0328 01:33:35.975697    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:17.232075Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8337aaa1903c5250 became pre-candidate at term 2"}
	I0328 01:33:35.975697    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:17.232112Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8337aaa1903c5250 received MsgPreVoteResp from 8337aaa1903c5250 at term 2"}
	I0328 01:33:35.975697    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:17.232126Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8337aaa1903c5250 became candidate at term 3"}
	I0328 01:33:35.975845    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:17.232135Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8337aaa1903c5250 received MsgVoteResp from 8337aaa1903c5250 at term 3"}
	I0328 01:33:35.975845    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:17.232146Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8337aaa1903c5250 became leader at term 3"}
	I0328 01:33:35.975910    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:17.232158Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 8337aaa1903c5250 elected leader 8337aaa1903c5250 at term 3"}
	I0328 01:33:35.975910    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:17.237341Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"8337aaa1903c5250","local-member-attributes":"{Name:multinode-240000 ClientURLs:[https://172.28.229.19:2379]}","request-path":"/0/members/8337aaa1903c5250/attributes","cluster-id":"9d63dbc5e8f5386f","publish-timeout":"7s"}
	I0328 01:33:35.975910    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:17.237562Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	I0328 01:33:35.975910    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:17.239961Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	I0328 01:33:35.975910    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:17.263569Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	I0328 01:33:35.976014    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:17.263595Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	I0328 01:33:35.976014    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:17.283007Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.28.229.19:2379"}
	I0328 01:33:35.976014    6044 command_runner.go:130] ! {"level":"info","ts":"2024-03-28T01:32:17.301354Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	I0328 01:33:35.987092    6044 logs.go:123] Gathering logs for kube-scheduler [7061eab02790] ...
	I0328 01:33:35.987092    6044 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7061eab02790"
	I0328 01:33:36.020438    6044 command_runner.go:130] ! I0328 01:07:24.655923       1 serving.go:380] Generated self-signed cert in-memory
	I0328 01:33:36.020501    6044 command_runner.go:130] ! W0328 01:07:26.955719       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	I0328 01:33:36.020565    6044 command_runner.go:130] ! W0328 01:07:26.956050       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	I0328 01:33:36.020565    6044 command_runner.go:130] ! W0328 01:07:26.956340       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	I0328 01:33:36.020624    6044 command_runner.go:130] ! W0328 01:07:26.956518       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0328 01:33:36.020647    6044 command_runner.go:130] ! I0328 01:07:27.011654       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.29.3"
	I0328 01:33:36.020647    6044 command_runner.go:130] ! I0328 01:07:27.011702       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0328 01:33:36.020708    6044 command_runner.go:130] ! I0328 01:07:27.016073       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0328 01:33:36.020708    6044 command_runner.go:130] ! I0328 01:07:27.016395       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0328 01:33:36.020742    6044 command_runner.go:130] ! I0328 01:07:27.016638       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0328 01:33:36.020863    6044 command_runner.go:130] ! W0328 01:07:27.041308       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0328 01:33:36.020863    6044 command_runner.go:130] ! E0328 01:07:27.041400       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0328 01:33:36.020863    6044 command_runner.go:130] ! W0328 01:07:27.041664       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0328 01:33:36.020863    6044 command_runner.go:130] ! E0328 01:07:27.043394       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0328 01:33:36.020863    6044 command_runner.go:130] ! I0328 01:07:27.016423       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0328 01:33:36.020863    6044 command_runner.go:130] ! W0328 01:07:27.042004       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0328 01:33:36.020863    6044 command_runner.go:130] ! E0328 01:07:27.047333       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0328 01:33:36.020863    6044 command_runner.go:130] ! W0328 01:07:27.042140       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0328 01:33:36.020863    6044 command_runner.go:130] ! E0328 01:07:27.047417       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0328 01:33:36.020863    6044 command_runner.go:130] ! W0328 01:07:27.042578       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0328 01:33:36.020863    6044 command_runner.go:130] ! E0328 01:07:27.047834       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0328 01:33:36.020863    6044 command_runner.go:130] ! W0328 01:07:27.042825       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0328 01:33:36.020863    6044 command_runner.go:130] ! E0328 01:07:27.047881       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0328 01:33:36.020863    6044 command_runner.go:130] ! W0328 01:07:27.054199       1 reflector.go:539] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0328 01:33:36.020863    6044 command_runner.go:130] ! E0328 01:07:27.054246       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0328 01:33:36.020863    6044 command_runner.go:130] ! W0328 01:07:27.054853       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0328 01:33:36.020863    6044 command_runner.go:130] ! E0328 01:07:27.054928       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0328 01:33:36.020863    6044 command_runner.go:130] ! W0328 01:07:27.055680       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0328 01:33:36.020863    6044 command_runner.go:130] ! E0328 01:07:27.056176       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0328 01:33:36.020863    6044 command_runner.go:130] ! W0328 01:07:27.056445       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0328 01:33:36.020863    6044 command_runner.go:130] ! E0328 01:07:27.056649       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0328 01:33:36.020863    6044 command_runner.go:130] ! W0328 01:07:27.056923       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0328 01:33:36.020863    6044 command_runner.go:130] ! E0328 01:07:27.057184       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0328 01:33:36.020863    6044 command_runner.go:130] ! W0328 01:07:27.057363       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0328 01:33:36.020863    6044 command_runner.go:130] ! E0328 01:07:27.057575       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0328 01:33:36.020863    6044 command_runner.go:130] ! W0328 01:07:27.057920       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0328 01:33:36.021771    6044 command_runner.go:130] ! E0328 01:07:27.058160       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0328 01:33:36.021771    6044 command_runner.go:130] ! W0328 01:07:27.058539       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0328 01:33:36.021771    6044 command_runner.go:130] ! E0328 01:07:27.058924       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0328 01:33:36.021771    6044 command_runner.go:130] ! W0328 01:07:27.059533       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0328 01:33:36.021937    6044 command_runner.go:130] ! E0328 01:07:27.060749       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0328 01:33:36.021937    6044 command_runner.go:130] ! W0328 01:07:27.927413       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0328 01:33:36.021937    6044 command_runner.go:130] ! E0328 01:07:27.927826       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0328 01:33:36.022012    6044 command_runner.go:130] ! W0328 01:07:28.013939       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0328 01:33:36.022056    6044 command_runner.go:130] ! E0328 01:07:28.014242       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0328 01:33:36.022122    6044 command_runner.go:130] ! W0328 01:07:28.056311       1 reflector.go:539] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0328 01:33:36.022164    6044 command_runner.go:130] ! E0328 01:07:28.058850       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0328 01:33:36.022164    6044 command_runner.go:130] ! W0328 01:07:28.076506       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0328 01:33:36.022242    6044 command_runner.go:130] ! E0328 01:07:28.076537       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0328 01:33:36.022242    6044 command_runner.go:130] ! W0328 01:07:28.106836       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0328 01:33:36.022320    6044 command_runner.go:130] ! E0328 01:07:28.107081       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0328 01:33:36.022320    6044 command_runner.go:130] ! W0328 01:07:28.240756       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0328 01:33:36.022320    6044 command_runner.go:130] ! E0328 01:07:28.240834       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0328 01:33:36.022444    6044 command_runner.go:130] ! W0328 01:07:28.255074       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0328 01:33:36.022444    6044 command_runner.go:130] ! E0328 01:07:28.255356       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0328 01:33:36.022444    6044 command_runner.go:130] ! W0328 01:07:28.278207       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0328 01:33:36.022558    6044 command_runner.go:130] ! E0328 01:07:28.278668       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0328 01:33:36.022558    6044 command_runner.go:130] ! W0328 01:07:28.381584       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0328 01:33:36.022645    6044 command_runner.go:130] ! E0328 01:07:28.381627       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0328 01:33:36.022645    6044 command_runner.go:130] ! W0328 01:07:28.514618       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0328 01:33:36.022645    6044 command_runner.go:130] ! E0328 01:07:28.515155       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0328 01:33:36.022786    6044 command_runner.go:130] ! W0328 01:07:28.528993       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0328 01:33:36.022786    6044 command_runner.go:130] ! E0328 01:07:28.529395       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0328 01:33:36.022871    6044 command_runner.go:130] ! W0328 01:07:28.532653       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0328 01:33:36.022871    6044 command_runner.go:130] ! E0328 01:07:28.532704       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0328 01:33:36.022871    6044 command_runner.go:130] ! W0328 01:07:28.584380       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0328 01:33:36.022951    6044 command_runner.go:130] ! E0328 01:07:28.585331       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0328 01:33:36.022951    6044 command_runner.go:130] ! W0328 01:07:28.617611       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0328 01:33:36.023031    6044 command_runner.go:130] ! E0328 01:07:28.618424       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0328 01:33:36.023031    6044 command_runner.go:130] ! W0328 01:07:28.646703       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0328 01:33:36.023031    6044 command_runner.go:130] ! E0328 01:07:28.647128       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0328 01:33:36.023129    6044 command_runner.go:130] ! I0328 01:07:30.316754       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0328 01:33:36.023129    6044 command_runner.go:130] ! I0328 01:29:38.212199       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0328 01:33:36.023164    6044 command_runner.go:130] ! I0328 01:29:38.213339       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0328 01:33:36.023164    6044 command_runner.go:130] ! I0328 01:29:38.213731       1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0328 01:33:36.023164    6044 command_runner.go:130] ! E0328 01:29:38.223877       1 run.go:74] "command failed" err="finished without leader elect"
	I0328 01:33:36.034103    6044 logs.go:123] Gathering logs for kube-controller-manager [1aa05268773e] ...
	I0328 01:33:36.034103    6044 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1aa05268773e"
	I0328 01:33:36.066167    6044 command_runner.go:130] ! I0328 01:07:25.444563       1 serving.go:380] Generated self-signed cert in-memory
	I0328 01:33:36.066167    6044 command_runner.go:130] ! I0328 01:07:26.119304       1 controllermanager.go:187] "Starting" version="v1.29.3"
	I0328 01:33:36.066167    6044 command_runner.go:130] ! I0328 01:07:26.119639       1 controllermanager.go:189] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0328 01:33:36.066167    6044 command_runner.go:130] ! I0328 01:07:26.122078       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0328 01:33:36.066167    6044 command_runner.go:130] ! I0328 01:07:26.122399       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0328 01:33:36.066295    6044 command_runner.go:130] ! I0328 01:07:26.123748       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0328 01:33:36.066295    6044 command_runner.go:130] ! I0328 01:07:26.124035       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0328 01:33:36.066343    6044 command_runner.go:130] ! I0328 01:07:29.961001       1 controllermanager.go:735] "Started controller" controller="serviceaccount-token-controller"
	I0328 01:33:36.066369    6044 command_runner.go:130] ! I0328 01:07:29.961384       1 shared_informer.go:311] Waiting for caches to sync for tokens
	I0328 01:33:36.066369    6044 command_runner.go:130] ! I0328 01:07:29.977654       1 controllermanager.go:735] "Started controller" controller="serviceaccount-controller"
	I0328 01:33:36.066369    6044 command_runner.go:130] ! I0328 01:07:29.978314       1 serviceaccounts_controller.go:111] "Starting service account controller"
	I0328 01:33:36.066369    6044 command_runner.go:130] ! I0328 01:07:29.978353       1 shared_informer.go:311] Waiting for caches to sync for service account
	I0328 01:33:36.066447    6044 command_runner.go:130] ! I0328 01:07:29.991603       1 controllermanager.go:735] "Started controller" controller="job-controller"
	I0328 01:33:36.066447    6044 command_runner.go:130] ! I0328 01:07:29.992075       1 job_controller.go:224] "Starting job controller"
	I0328 01:33:36.066447    6044 command_runner.go:130] ! I0328 01:07:29.992191       1 shared_informer.go:311] Waiting for caches to sync for job
	I0328 01:33:36.066447    6044 command_runner.go:130] ! I0328 01:07:30.016866       1 controllermanager.go:735] "Started controller" controller="cronjob-controller"
	I0328 01:33:36.066447    6044 command_runner.go:130] ! I0328 01:07:30.017722       1 cronjob_controllerv2.go:139] "Starting cronjob controller v2"
	I0328 01:33:36.066447    6044 command_runner.go:130] ! I0328 01:07:30.017738       1 shared_informer.go:311] Waiting for caches to sync for cronjob
	I0328 01:33:36.066529    6044 command_runner.go:130] ! I0328 01:07:30.032215       1 node_lifecycle_controller.go:425] "Controller will reconcile labels"
	I0328 01:33:36.066529    6044 command_runner.go:130] ! I0328 01:07:30.032285       1 controllermanager.go:735] "Started controller" controller="node-lifecycle-controller"
	I0328 01:33:36.066529    6044 command_runner.go:130] ! I0328 01:07:30.032300       1 core.go:294] "Warning: configure-cloud-routes is set, but no cloud provider specified. Will not configure cloud provider routes."
	I0328 01:33:36.066609    6044 command_runner.go:130] ! I0328 01:07:30.032309       1 controllermanager.go:713] "Warning: skipping controller" controller="node-route-controller"
	I0328 01:33:36.066609    6044 command_runner.go:130] ! I0328 01:07:30.032580       1 node_lifecycle_controller.go:459] "Sending events to api server"
	I0328 01:33:36.066609    6044 command_runner.go:130] ! I0328 01:07:30.032630       1 node_lifecycle_controller.go:470] "Starting node controller"
	I0328 01:33:36.066717    6044 command_runner.go:130] ! I0328 01:07:30.032638       1 shared_informer.go:311] Waiting for caches to sync for taint
	I0328 01:33:36.066717    6044 command_runner.go:130] ! I0328 01:07:30.048026       1 controllermanager.go:735] "Started controller" controller="persistentvolume-protection-controller"
	I0328 01:33:36.066717    6044 command_runner.go:130] ! I0328 01:07:30.048977       1 pv_protection_controller.go:78] "Starting PV protection controller"
	I0328 01:33:36.066717    6044 command_runner.go:130] ! I0328 01:07:30.049064       1 shared_informer.go:311] Waiting for caches to sync for PV protection
	I0328 01:33:36.066717    6044 command_runner.go:130] ! I0328 01:07:30.062689       1 shared_informer.go:318] Caches are synced for tokens
	I0328 01:33:36.066717    6044 command_runner.go:130] ! I0328 01:07:30.089724       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="cronjobs.batch"
	I0328 01:33:36.066717    6044 command_runner.go:130] ! I0328 01:07:30.089888       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="deployments.apps"
	I0328 01:33:36.066717    6044 command_runner.go:130] ! I0328 01:07:30.089911       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="jobs.batch"
	I0328 01:33:36.066717    6044 command_runner.go:130] ! W0328 01:07:30.089999       1 shared_informer.go:591] resyncPeriod 14h20m6.725226039s is smaller than resyncCheckPeriod 16h11m20.804614115s and the informer has already started. Changing it to 16h11m20.804614115s
	I0328 01:33:36.066717    6044 command_runner.go:130] ! I0328 01:07:30.090238       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="serviceaccounts"
	I0328 01:33:36.066926    6044 command_runner.go:130] ! I0328 01:07:30.090386       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="daemonsets.apps"
	I0328 01:33:36.066926    6044 command_runner.go:130] ! I0328 01:07:30.090486       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="csistoragecapacities.storage.k8s.io"
	I0328 01:33:36.067002    6044 command_runner.go:130] ! I0328 01:07:30.090728       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="horizontalpodautoscalers.autoscaling"
	I0328 01:33:36.067002    6044 command_runner.go:130] ! I0328 01:07:30.090833       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="leases.coordination.k8s.io"
	I0328 01:33:36.067002    6044 command_runner.go:130] ! I0328 01:07:30.090916       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="endpointslices.discovery.k8s.io"
	I0328 01:33:36.067068    6044 command_runner.go:130] ! I0328 01:07:30.091233       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="statefulsets.apps"
	I0328 01:33:36.067068    6044 command_runner.go:130] ! I0328 01:07:30.091333       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="endpoints"
	I0328 01:33:36.067068    6044 command_runner.go:130] ! I0328 01:07:30.091456       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="roles.rbac.authorization.k8s.io"
	I0328 01:33:36.067068    6044 command_runner.go:130] ! I0328 01:07:30.091573       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="rolebindings.rbac.authorization.k8s.io"
	I0328 01:33:36.067068    6044 command_runner.go:130] ! I0328 01:07:30.091823       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="ingresses.networking.k8s.io"
	I0328 01:33:36.067131    6044 command_runner.go:130] ! I0328 01:07:30.091924       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="networkpolicies.networking.k8s.io"
	I0328 01:33:36.067177    6044 command_runner.go:130] ! I0328 01:07:30.092241       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="poddisruptionbudgets.policy"
	I0328 01:33:36.067177    6044 command_runner.go:130] ! I0328 01:07:30.092436       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="podtemplates"
	I0328 01:33:36.067177    6044 command_runner.go:130] ! I0328 01:07:30.092587       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="controllerrevisions.apps"
	I0328 01:33:36.067236    6044 command_runner.go:130] ! I0328 01:07:30.092720       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="limitranges"
	I0328 01:33:36.067236    6044 command_runner.go:130] ! I0328 01:07:30.092907       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="replicasets.apps"
	I0328 01:33:36.067236    6044 command_runner.go:130] ! I0328 01:07:30.092993       1 controllermanager.go:735] "Started controller" controller="resourcequota-controller"
	I0328 01:33:36.067236    6044 command_runner.go:130] ! I0328 01:07:30.093270       1 resource_quota_controller.go:294] "Starting resource quota controller"
	I0328 01:33:36.067236    6044 command_runner.go:130] ! I0328 01:07:30.095516       1 shared_informer.go:311] Waiting for caches to sync for resource quota
	I0328 01:33:36.067236    6044 command_runner.go:130] ! I0328 01:07:30.095735       1 resource_quota_monitor.go:305] "QuotaMonitor running"
	I0328 01:33:36.067236    6044 command_runner.go:130] ! I0328 01:07:30.117824       1 controllermanager.go:735] "Started controller" controller="legacy-serviceaccount-token-cleaner-controller"
	I0328 01:33:36.067236    6044 command_runner.go:130] ! I0328 01:07:30.117990       1 legacy_serviceaccount_token_cleaner.go:103] "Starting legacy service account token cleaner controller"
	I0328 01:33:36.067346    6044 command_runner.go:130] ! I0328 01:07:30.118005       1 shared_informer.go:311] Waiting for caches to sync for legacy-service-account-token-cleaner
	I0328 01:33:36.067346    6044 command_runner.go:130] ! I0328 01:07:30.139352       1 controllermanager.go:735] "Started controller" controller="disruption-controller"
	I0328 01:33:36.067346    6044 command_runner.go:130] ! I0328 01:07:30.139526       1 disruption.go:433] "Sending events to api server."
	I0328 01:33:36.067346    6044 command_runner.go:130] ! I0328 01:07:30.139561       1 disruption.go:444] "Starting disruption controller"
	I0328 01:33:36.067346    6044 command_runner.go:130] ! I0328 01:07:30.139568       1 shared_informer.go:311] Waiting for caches to sync for disruption
	I0328 01:33:36.067346    6044 command_runner.go:130] ! I0328 01:07:30.158607       1 controllermanager.go:735] "Started controller" controller="certificatesigningrequest-approving-controller"
	I0328 01:33:36.067442    6044 command_runner.go:130] ! I0328 01:07:30.158860       1 certificate_controller.go:115] "Starting certificate controller" name="csrapproving"
	I0328 01:33:36.067442    6044 command_runner.go:130] ! I0328 01:07:30.158912       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrapproving
	I0328 01:33:36.067442    6044 command_runner.go:130] ! I0328 01:07:30.170615       1 controllermanager.go:735] "Started controller" controller="persistentvolume-binder-controller"
	I0328 01:33:36.067442    6044 command_runner.go:130] ! I0328 01:07:30.171245       1 pv_controller_base.go:319] "Starting persistent volume controller"
	I0328 01:33:36.067525    6044 command_runner.go:130] ! I0328 01:07:30.171330       1 shared_informer.go:311] Waiting for caches to sync for persistent volume
	I0328 01:33:36.067525    6044 command_runner.go:130] ! I0328 01:07:30.319254       1 controllermanager.go:735] "Started controller" controller="clusterrole-aggregation-controller"
	I0328 01:33:36.067525    6044 command_runner.go:130] ! I0328 01:07:30.319305       1 clusterroleaggregation_controller.go:189] "Starting ClusterRoleAggregator controller"
	I0328 01:33:36.067606    6044 command_runner.go:130] ! I0328 01:07:30.319687       1 shared_informer.go:311] Waiting for caches to sync for ClusterRoleAggregator
	I0328 01:33:36.067606    6044 command_runner.go:130] ! I0328 01:07:30.471941       1 controllermanager.go:735] "Started controller" controller="ttl-after-finished-controller"
	I0328 01:33:36.067606    6044 command_runner.go:130] ! I0328 01:07:30.472075       1 controllermanager.go:687] "Controller is disabled by a feature gate" controller="validatingadmissionpolicy-status-controller" requiredFeatureGates=["ValidatingAdmissionPolicy"]
	I0328 01:33:36.067606    6044 command_runner.go:130] ! I0328 01:07:30.472153       1 ttlafterfinished_controller.go:109] "Starting TTL after finished controller"
	I0328 01:33:36.067672    6044 command_runner.go:130] ! I0328 01:07:30.472461       1 shared_informer.go:311] Waiting for caches to sync for TTL after finished
	I0328 01:33:36.067695    6044 command_runner.go:130] ! I0328 01:07:30.621249       1 controllermanager.go:735] "Started controller" controller="pod-garbage-collector-controller"
	I0328 01:33:36.067695    6044 command_runner.go:130] ! I0328 01:07:30.621373       1 gc_controller.go:101] "Starting GC controller"
	I0328 01:33:36.067764    6044 command_runner.go:130] ! I0328 01:07:30.621385       1 shared_informer.go:311] Waiting for caches to sync for GC
	I0328 01:33:36.067764    6044 command_runner.go:130] ! I0328 01:07:30.935875       1 controllermanager.go:735] "Started controller" controller="horizontal-pod-autoscaler-controller"
	I0328 01:33:36.067879    6044 command_runner.go:130] ! I0328 01:07:30.935911       1 controllermanager.go:687] "Controller is disabled by a feature gate" controller="service-cidr-controller" requiredFeatureGates=["MultiCIDRServiceAllocator"]
	I0328 01:33:36.067879    6044 command_runner.go:130] ! I0328 01:07:30.935949       1 horizontal.go:200] "Starting HPA controller"
	I0328 01:33:36.067879    6044 command_runner.go:130] ! I0328 01:07:30.935957       1 shared_informer.go:311] Waiting for caches to sync for HPA
	I0328 01:33:36.067949    6044 command_runner.go:130] ! I0328 01:07:31.068710       1 controllermanager.go:735] "Started controller" controller="bootstrap-signer-controller"
	I0328 01:33:36.067974    6044 command_runner.go:130] ! I0328 01:07:31.068846       1 shared_informer.go:311] Waiting for caches to sync for bootstrap_signer
	I0328 01:33:36.067974    6044 command_runner.go:130] ! I0328 01:07:31.220656       1 controllermanager.go:735] "Started controller" controller="root-ca-certificate-publisher-controller"
	I0328 01:33:36.067974    6044 command_runner.go:130] ! I0328 01:07:31.220877       1 publisher.go:102] "Starting root CA cert publisher controller"
	I0328 01:33:36.068028    6044 command_runner.go:130] ! I0328 01:07:31.220890       1 shared_informer.go:311] Waiting for caches to sync for crt configmap
	I0328 01:33:36.068054    6044 command_runner.go:130] ! I0328 01:07:31.379912       1 controllermanager.go:735] "Started controller" controller="endpointslice-mirroring-controller"
	I0328 01:33:36.068054    6044 command_runner.go:130] ! I0328 01:07:31.380187       1 endpointslicemirroring_controller.go:223] "Starting EndpointSliceMirroring controller"
	I0328 01:33:36.068054    6044 command_runner.go:130] ! I0328 01:07:31.380276       1 shared_informer.go:311] Waiting for caches to sync for endpoint_slice_mirroring
	I0328 01:33:36.068105    6044 command_runner.go:130] ! I0328 01:07:31.525433       1 controllermanager.go:735] "Started controller" controller="replicationcontroller-controller"
	I0328 01:33:36.068130    6044 command_runner.go:130] ! I0328 01:07:31.525577       1 replica_set.go:214] "Starting controller" name="replicationcontroller"
	I0328 01:33:36.068130    6044 command_runner.go:130] ! I0328 01:07:31.525588       1 shared_informer.go:311] Waiting for caches to sync for ReplicationController
	I0328 01:33:36.068182    6044 command_runner.go:130] ! I0328 01:07:31.690023       1 controllermanager.go:735] "Started controller" controller="ttl-controller"
	I0328 01:33:36.068182    6044 command_runner.go:130] ! I0328 01:07:31.690130       1 ttl_controller.go:124] "Starting TTL controller"
	I0328 01:33:36.068206    6044 command_runner.go:130] ! I0328 01:07:31.690144       1 shared_informer.go:311] Waiting for caches to sync for TTL
	I0328 01:33:36.068206    6044 command_runner.go:130] ! I0328 01:07:31.828859       1 controllermanager.go:735] "Started controller" controller="token-cleaner-controller"
	I0328 01:33:36.068206    6044 command_runner.go:130] ! I0328 01:07:31.828953       1 tokencleaner.go:112] "Starting token cleaner controller"
	I0328 01:33:36.068206    6044 command_runner.go:130] ! I0328 01:07:31.828963       1 shared_informer.go:311] Waiting for caches to sync for token_cleaner
	I0328 01:33:36.068206    6044 command_runner.go:130] ! I0328 01:07:31.828970       1 shared_informer.go:318] Caches are synced for token_cleaner
	I0328 01:33:36.068206    6044 command_runner.go:130] ! I0328 01:07:31.991678       1 controllermanager.go:735] "Started controller" controller="persistentvolume-attach-detach-controller"
	I0328 01:33:36.068297    6044 command_runner.go:130] ! I0328 01:07:31.994944       1 controllermanager.go:687] "Controller is disabled by a feature gate" controller="storageversion-garbage-collector-controller" requiredFeatureGates=["APIServerIdentity","StorageVersionAPI"]
	I0328 01:33:36.068405    6044 command_runner.go:130] ! I0328 01:07:31.994881       1 attach_detach_controller.go:337] "Starting attach detach controller"
	I0328 01:33:36.068405    6044 command_runner.go:130] ! I0328 01:07:31.995033       1 shared_informer.go:311] Waiting for caches to sync for attach detach
	I0328 01:33:36.068466    6044 command_runner.go:130] ! I0328 01:07:32.040043       1 controllermanager.go:735] "Started controller" controller="taint-eviction-controller"
	I0328 01:33:36.068485    6044 command_runner.go:130] ! I0328 01:07:32.041773       1 taint_eviction.go:285] "Starting" controller="taint-eviction-controller"
	I0328 01:33:36.068485    6044 command_runner.go:130] ! I0328 01:07:32.041876       1 taint_eviction.go:291] "Sending events to api server"
	I0328 01:33:36.068841    6044 command_runner.go:130] ! I0328 01:07:32.041901       1 shared_informer.go:311] Waiting for caches to sync for taint-eviction-controller
	I0328 01:33:36.068918    6044 command_runner.go:130] ! I0328 01:07:32.281623       1 controllermanager.go:735] "Started controller" controller="namespace-controller"
	I0328 01:33:36.069024    6044 command_runner.go:130] ! I0328 01:07:32.281708       1 namespace_controller.go:197] "Starting namespace controller"
	I0328 01:33:36.069024    6044 command_runner.go:130] ! I0328 01:07:32.281718       1 shared_informer.go:311] Waiting for caches to sync for namespace
	I0328 01:33:36.069024    6044 command_runner.go:130] ! I0328 01:07:32.316698       1 certificate_controller.go:115] "Starting certificate controller" name="csrsigning-kubelet-serving"
	I0328 01:33:36.069024    6044 command_runner.go:130] ! I0328 01:07:32.316737       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-kubelet-serving
	I0328 01:33:36.069093    6044 command_runner.go:130] ! I0328 01:07:32.316772       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0328 01:33:36.069093    6044 command_runner.go:130] ! I0328 01:07:32.322120       1 certificate_controller.go:115] "Starting certificate controller" name="csrsigning-kubelet-client"
	I0328 01:33:36.069093    6044 command_runner.go:130] ! I0328 01:07:32.322156       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-kubelet-client
	I0328 01:33:36.069197    6044 command_runner.go:130] ! I0328 01:07:32.322181       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0328 01:33:36.069197    6044 command_runner.go:130] ! I0328 01:07:32.327656       1 certificate_controller.go:115] "Starting certificate controller" name="csrsigning-kube-apiserver-client"
	I0328 01:33:36.069197    6044 command_runner.go:130] ! I0328 01:07:32.327690       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client
	I0328 01:33:36.069264    6044 command_runner.go:130] ! I0328 01:07:32.327721       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0328 01:33:36.069289    6044 command_runner.go:130] ! I0328 01:07:32.331471       1 controllermanager.go:735] "Started controller" controller="certificatesigningrequest-signing-controller"
	I0328 01:33:36.069289    6044 command_runner.go:130] ! I0328 01:07:32.331563       1 certificate_controller.go:115] "Starting certificate controller" name="csrsigning-legacy-unknown"
	I0328 01:33:36.069289    6044 command_runner.go:130] ! I0328 01:07:32.331574       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
	I0328 01:33:36.069342    6044 command_runner.go:130] ! I0328 01:07:32.331616       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0328 01:33:36.069342    6044 command_runner.go:130] ! E0328 01:07:32.365862       1 core.go:270] "Failed to start cloud node lifecycle controller" err="no cloud provider provided"
	I0328 01:33:36.069342    6044 command_runner.go:130] ! I0328 01:07:32.365985       1 controllermanager.go:713] "Warning: skipping controller" controller="cloud-node-lifecycle-controller"
	I0328 01:33:36.069405    6044 command_runner.go:130] ! I0328 01:07:32.366024       1 controllermanager.go:687] "Controller is disabled by a feature gate" controller="resourceclaim-controller" requiredFeatureGates=["DynamicResourceAllocation"]
	I0328 01:33:36.069430    6044 command_runner.go:130] ! I0328 01:07:32.520320       1 controllermanager.go:735] "Started controller" controller="endpointslice-controller"
	I0328 01:33:36.069430    6044 command_runner.go:130] ! I0328 01:07:32.520407       1 endpointslice_controller.go:264] "Starting endpoint slice controller"
	I0328 01:33:36.069430    6044 command_runner.go:130] ! I0328 01:07:32.520419       1 shared_informer.go:311] Waiting for caches to sync for endpoint_slice
	I0328 01:33:36.069481    6044 command_runner.go:130] ! I0328 01:07:32.567130       1 controllermanager.go:735] "Started controller" controller="certificatesigningrequest-cleaner-controller"
	I0328 01:33:36.069505    6044 command_runner.go:130] ! I0328 01:07:32.567208       1 cleaner.go:83] "Starting CSR cleaner controller"
	I0328 01:33:36.069505    6044 command_runner.go:130] ! I0328 01:07:32.719261       1 controllermanager.go:735] "Started controller" controller="statefulset-controller"
	I0328 01:33:36.069557    6044 command_runner.go:130] ! I0328 01:07:32.719392       1 stateful_set.go:161] "Starting stateful set controller"
	I0328 01:33:36.069582    6044 command_runner.go:130] ! I0328 01:07:32.719403       1 shared_informer.go:311] Waiting for caches to sync for stateful set
	I0328 01:33:36.069582    6044 command_runner.go:130] ! I0328 01:07:32.872730       1 controllermanager.go:735] "Started controller" controller="persistentvolumeclaim-protection-controller"
	I0328 01:33:36.069582    6044 command_runner.go:130] ! I0328 01:07:32.872869       1 pvc_protection_controller.go:102] "Starting PVC protection controller"
	I0328 01:33:36.069644    6044 command_runner.go:130] ! I0328 01:07:32.873455       1 shared_informer.go:311] Waiting for caches to sync for PVC protection
	I0328 01:33:36.069644    6044 command_runner.go:130] ! I0328 01:07:33.116208       1 garbagecollector.go:155] "Starting controller" controller="garbagecollector"
	I0328 01:33:36.069666    6044 command_runner.go:130] ! I0328 01:07:33.116233       1 shared_informer.go:311] Waiting for caches to sync for garbage collector
	I0328 01:33:36.069721    6044 command_runner.go:130] ! I0328 01:07:33.116257       1 graph_builder.go:302] "Running" component="GraphBuilder"
	I0328 01:33:36.069721    6044 command_runner.go:130] ! I0328 01:07:33.116280       1 controllermanager.go:735] "Started controller" controller="garbage-collector-controller"
	I0328 01:33:36.069721    6044 command_runner.go:130] ! I0328 01:07:33.370650       1 controllermanager.go:735] "Started controller" controller="deployment-controller"
	I0328 01:33:36.069721    6044 command_runner.go:130] ! I0328 01:07:33.370836       1 deployment_controller.go:168] "Starting controller" controller="deployment"
	I0328 01:33:36.069721    6044 command_runner.go:130] ! I0328 01:07:33.370851       1 shared_informer.go:311] Waiting for caches to sync for deployment
	I0328 01:33:36.069721    6044 command_runner.go:130] ! E0328 01:07:33.529036       1 core.go:105] "Failed to start service controller" err="WARNING: no cloud provider provided, services of type LoadBalancer will fail"
	I0328 01:33:36.069721    6044 command_runner.go:130] ! I0328 01:07:33.529209       1 controllermanager.go:713] "Warning: skipping controller" controller="service-lb-controller"
	I0328 01:33:36.069721    6044 command_runner.go:130] ! I0328 01:07:33.674381       1 controllermanager.go:735] "Started controller" controller="replicaset-controller"
	I0328 01:33:36.069721    6044 command_runner.go:130] ! I0328 01:07:33.674638       1 replica_set.go:214] "Starting controller" name="replicaset"
	I0328 01:33:36.069721    6044 command_runner.go:130] ! I0328 01:07:33.674700       1 shared_informer.go:311] Waiting for caches to sync for ReplicaSet
	I0328 01:33:36.069721    6044 command_runner.go:130] ! I0328 01:07:43.727895       1 range_allocator.go:111] "No Secondary Service CIDR provided. Skipping filtering out secondary service addresses"
	I0328 01:33:36.069721    6044 command_runner.go:130] ! I0328 01:07:43.728282       1 controllermanager.go:735] "Started controller" controller="node-ipam-controller"
	I0328 01:33:36.069721    6044 command_runner.go:130] ! I0328 01:07:43.728736       1 node_ipam_controller.go:160] "Starting ipam controller"
	I0328 01:33:36.069721    6044 command_runner.go:130] ! I0328 01:07:43.728751       1 shared_informer.go:311] Waiting for caches to sync for node
	I0328 01:33:36.069721    6044 command_runner.go:130] ! I0328 01:07:43.743975       1 controllermanager.go:735] "Started controller" controller="persistentvolume-expander-controller"
	I0328 01:33:36.069721    6044 command_runner.go:130] ! I0328 01:07:43.744248       1 expand_controller.go:328] "Starting expand controller"
	I0328 01:33:36.069721    6044 command_runner.go:130] ! I0328 01:07:43.744261       1 shared_informer.go:311] Waiting for caches to sync for expand
	I0328 01:33:36.069721    6044 command_runner.go:130] ! I0328 01:07:43.764054       1 controllermanager.go:735] "Started controller" controller="ephemeral-volume-controller"
	I0328 01:33:36.069721    6044 command_runner.go:130] ! I0328 01:07:43.765369       1 controller.go:169] "Starting ephemeral volume controller"
	I0328 01:33:36.069721    6044 command_runner.go:130] ! I0328 01:07:43.765400       1 shared_informer.go:311] Waiting for caches to sync for ephemeral
	I0328 01:33:36.069721    6044 command_runner.go:130] ! I0328 01:07:43.801140       1 controllermanager.go:735] "Started controller" controller="endpoints-controller"
	I0328 01:33:36.069721    6044 command_runner.go:130] ! I0328 01:07:43.801602       1 endpoints_controller.go:174] "Starting endpoint controller"
	I0328 01:33:36.069721    6044 command_runner.go:130] ! I0328 01:07:43.801743       1 shared_informer.go:311] Waiting for caches to sync for endpoint
	I0328 01:33:36.069721    6044 command_runner.go:130] ! I0328 01:07:43.818031       1 controllermanager.go:735] "Started controller" controller="daemonset-controller"
	I0328 01:33:36.069721    6044 command_runner.go:130] ! I0328 01:07:43.818707       1 daemon_controller.go:291] "Starting daemon sets controller"
	I0328 01:33:36.069721    6044 command_runner.go:130] ! I0328 01:07:43.820733       1 shared_informer.go:311] Waiting for caches to sync for daemon sets
	I0328 01:33:36.069721    6044 command_runner.go:130] ! I0328 01:07:43.839571       1 shared_informer.go:311] Waiting for caches to sync for resource quota
	I0328 01:33:36.069721    6044 command_runner.go:130] ! I0328 01:07:43.887668       1 shared_informer.go:318] Caches are synced for TTL after finished
	I0328 01:33:36.069721    6044 command_runner.go:130] ! I0328 01:07:43.905965       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-240000\" does not exist"
	I0328 01:33:36.069721    6044 command_runner.go:130] ! I0328 01:07:43.917970       1 shared_informer.go:318] Caches are synced for cronjob
	I0328 01:33:36.069721    6044 command_runner.go:130] ! I0328 01:07:43.918581       1 shared_informer.go:318] Caches are synced for legacy-service-account-token-cleaner
	I0328 01:33:36.069721    6044 command_runner.go:130] ! I0328 01:07:43.921260       1 shared_informer.go:318] Caches are synced for endpoint_slice
	I0328 01:33:36.069721    6044 command_runner.go:130] ! I0328 01:07:43.921573       1 shared_informer.go:318] Caches are synced for GC
	I0328 01:33:36.069721    6044 command_runner.go:130] ! I0328 01:07:43.921763       1 shared_informer.go:318] Caches are synced for stateful set
	I0328 01:33:36.070255    6044 command_runner.go:130] ! I0328 01:07:43.923599       1 shared_informer.go:318] Caches are synced for ClusterRoleAggregator
	I0328 01:33:36.070255    6044 command_runner.go:130] ! I0328 01:07:43.924267       1 shared_informer.go:311] Waiting for caches to sync for garbage collector
	I0328 01:33:36.070299    6044 command_runner.go:130] ! I0328 01:07:43.922298       1 shared_informer.go:318] Caches are synced for daemon sets
	I0328 01:33:36.070349    6044 command_runner.go:130] ! I0328 01:07:43.928013       1 shared_informer.go:318] Caches are synced for ReplicationController
	I0328 01:33:36.070349    6044 command_runner.go:130] ! I0328 01:07:43.928774       1 shared_informer.go:318] Caches are synced for node
	I0328 01:33:36.070409    6044 command_runner.go:130] ! I0328 01:07:43.932324       1 range_allocator.go:174] "Sending events to api server"
	I0328 01:33:36.070409    6044 command_runner.go:130] ! I0328 01:07:43.932665       1 range_allocator.go:178] "Starting range CIDR allocator"
	I0328 01:33:36.070443    6044 command_runner.go:130] ! I0328 01:07:43.932965       1 shared_informer.go:311] Waiting for caches to sync for cidrallocator
	I0328 01:33:36.070443    6044 command_runner.go:130] ! I0328 01:07:43.933302       1 shared_informer.go:318] Caches are synced for cidrallocator
	I0328 01:33:36.070482    6044 command_runner.go:130] ! I0328 01:07:43.922308       1 shared_informer.go:318] Caches are synced for crt configmap
	I0328 01:33:36.070482    6044 command_runner.go:130] ! I0328 01:07:43.936175       1 shared_informer.go:318] Caches are synced for HPA
	I0328 01:33:36.070482    6044 command_runner.go:130] ! I0328 01:07:43.933370       1 shared_informer.go:318] Caches are synced for taint
	I0328 01:33:36.070482    6044 command_runner.go:130] ! I0328 01:07:43.936479       1 node_lifecycle_controller.go:1222] "Initializing eviction metric for zone" zone=""
	I0328 01:33:36.070535    6044 command_runner.go:130] ! I0328 01:07:43.936564       1 node_lifecycle_controller.go:874] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-240000"
	I0328 01:33:36.070535    6044 command_runner.go:130] ! I0328 01:07:43.936602       1 node_lifecycle_controller.go:1026] "Controller detected that all Nodes are not-Ready. Entering master disruption mode"
	I0328 01:33:36.070566    6044 command_runner.go:130] ! I0328 01:07:43.937774       1 event.go:376] "Event occurred" object="multinode-240000" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-240000 event: Registered Node multinode-240000 in Controller"
	I0328 01:33:36.070599    6044 command_runner.go:130] ! I0328 01:07:43.945317       1 shared_informer.go:318] Caches are synced for taint-eviction-controller
	I0328 01:33:36.070650    6044 command_runner.go:130] ! I0328 01:07:43.945634       1 shared_informer.go:318] Caches are synced for expand
	I0328 01:33:36.070650    6044 command_runner.go:130] ! I0328 01:07:43.953475       1 shared_informer.go:318] Caches are synced for PV protection
	I0328 01:33:36.070731    6044 command_runner.go:130] ! I0328 01:07:43.955430       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-240000" podCIDRs=["10.244.0.0/24"]
	I0328 01:33:36.070731    6044 command_runner.go:130] ! I0328 01:07:43.967780       1 shared_informer.go:318] Caches are synced for ephemeral
	I0328 01:33:36.070731    6044 command_runner.go:130] ! I0328 01:07:43.970146       1 shared_informer.go:318] Caches are synced for bootstrap_signer
	I0328 01:33:36.070793    6044 command_runner.go:130] ! I0328 01:07:43.973346       1 shared_informer.go:318] Caches are synced for persistent volume
	I0328 01:33:36.070793    6044 command_runner.go:130] ! I0328 01:07:43.973608       1 shared_informer.go:318] Caches are synced for PVC protection
	I0328 01:33:36.070793    6044 command_runner.go:130] ! I0328 01:07:43.981178       1 shared_informer.go:318] Caches are synced for endpoint_slice_mirroring
	I0328 01:33:36.070835    6044 command_runner.go:130] ! I0328 01:07:43.981918       1 event.go:376] "Event occurred" object="kube-system/kube-scheduler-multinode-240000" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0328 01:33:36.070926    6044 command_runner.go:130] ! I0328 01:07:43.981953       1 event.go:376] "Event occurred" object="kube-system/kube-apiserver-multinode-240000" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0328 01:33:36.070926    6044 command_runner.go:130] ! I0328 01:07:43.981962       1 event.go:376] "Event occurred" object="kube-system/etcd-multinode-240000" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0328 01:33:36.070997    6044 command_runner.go:130] ! I0328 01:07:43.982017       1 shared_informer.go:318] Caches are synced for namespace
	I0328 01:33:36.070997    6044 command_runner.go:130] ! I0328 01:07:43.982124       1 shared_informer.go:318] Caches are synced for service account
	I0328 01:33:36.071032    6044 command_runner.go:130] ! I0328 01:07:43.983577       1 event.go:376] "Event occurred" object="kube-system/kube-controller-manager-multinode-240000" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0328 01:33:36.071087    6044 command_runner.go:130] ! I0328 01:07:43.992236       1 shared_informer.go:318] Caches are synced for job
	I0328 01:33:36.071087    6044 command_runner.go:130] ! I0328 01:07:43.992438       1 shared_informer.go:318] Caches are synced for TTL
	I0328 01:33:36.071142    6044 command_runner.go:130] ! I0328 01:07:43.995152       1 shared_informer.go:318] Caches are synced for attach detach
	I0328 01:33:36.071142    6044 command_runner.go:130] ! I0328 01:07:44.003250       1 shared_informer.go:318] Caches are synced for endpoint
	I0328 01:33:36.071176    6044 command_runner.go:130] ! I0328 01:07:44.023343       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-client
	I0328 01:33:36.071176    6044 command_runner.go:130] ! I0328 01:07:44.023546       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-serving
	I0328 01:33:36.071176    6044 command_runner.go:130] ! I0328 01:07:44.030529       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0328 01:33:36.071228    6044 command_runner.go:130] ! I0328 01:07:44.032370       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-legacy-unknown
	I0328 01:33:36.071228    6044 command_runner.go:130] ! I0328 01:07:44.039826       1 shared_informer.go:318] Caches are synced for resource quota
	I0328 01:33:36.071269    6044 command_runner.go:130] ! I0328 01:07:44.039875       1 shared_informer.go:318] Caches are synced for disruption
	I0328 01:33:36.071320    6044 command_runner.go:130] ! I0328 01:07:44.059155       1 shared_informer.go:318] Caches are synced for certificate-csrapproving
	I0328 01:33:36.071361    6044 command_runner.go:130] ! I0328 01:07:44.071020       1 shared_informer.go:318] Caches are synced for deployment
	I0328 01:33:36.071405    6044 command_runner.go:130] ! I0328 01:07:44.074821       1 shared_informer.go:318] Caches are synced for ReplicaSet
	I0328 01:33:36.071405    6044 command_runner.go:130] ! I0328 01:07:44.095916       1 shared_informer.go:318] Caches are synced for resource quota
	I0328 01:33:36.071437    6044 command_runner.go:130] ! I0328 01:07:44.097596       1 event.go:376] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-rwghf"
	I0328 01:33:36.071437    6044 command_runner.go:130] ! I0328 01:07:44.101053       1 event.go:376] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-47rqg"
	I0328 01:33:36.071437    6044 command_runner.go:130] ! I0328 01:07:44.321636       1 event.go:376] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-76f75df574 to 2"
	I0328 01:33:36.071437    6044 command_runner.go:130] ! I0328 01:07:44.505533       1 event.go:376] "Event occurred" object="kube-system/coredns-76f75df574" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-76f75df574-fgw8j"
	I0328 01:33:36.071437    6044 command_runner.go:130] ! I0328 01:07:44.516581       1 shared_informer.go:318] Caches are synced for garbage collector
	I0328 01:33:36.071437    6044 command_runner.go:130] ! I0328 01:07:44.516605       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I0328 01:33:36.071437    6044 command_runner.go:130] ! I0328 01:07:44.526884       1 shared_informer.go:318] Caches are synced for garbage collector
	I0328 01:33:36.071437    6044 command_runner.go:130] ! I0328 01:07:44.626020       1 event.go:376] "Event occurred" object="kube-system/coredns-76f75df574" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-76f75df574-776ph"
	I0328 01:33:36.071437    6044 command_runner.go:130] ! I0328 01:07:44.696026       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="375.988233ms"
	I0328 01:33:36.071437    6044 command_runner.go:130] ! I0328 01:07:44.735389       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="39.221627ms"
	I0328 01:33:36.071437    6044 command_runner.go:130] ! I0328 01:07:44.735856       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="390.399µs"
	I0328 01:33:36.071437    6044 command_runner.go:130] ! I0328 01:07:45.456688       1 event.go:376] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-76f75df574 to 1 from 2"
	I0328 01:33:36.071437    6044 command_runner.go:130] ! I0328 01:07:45.536906       1 event.go:376] "Event occurred" object="kube-system/coredns-76f75df574" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-76f75df574-fgw8j"
	I0328 01:33:36.071437    6044 command_runner.go:130] ! I0328 01:07:45.583335       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="126.427189ms"
	I0328 01:33:36.071437    6044 command_runner.go:130] ! I0328 01:07:45.637187       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="53.741283ms"
	I0328 01:33:36.071437    6044 command_runner.go:130] ! I0328 01:07:45.710380       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="73.035205ms"
	I0328 01:33:36.071437    6044 command_runner.go:130] ! I0328 01:07:45.710568       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="73.7µs"
	I0328 01:33:36.071437    6044 command_runner.go:130] ! I0328 01:07:57.839298       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="81.8µs"
	I0328 01:33:36.071437    6044 command_runner.go:130] ! I0328 01:07:57.891332       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="135.3µs"
	I0328 01:33:36.071437    6044 command_runner.go:130] ! I0328 01:07:58.938669       1 node_lifecycle_controller.go:1045] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I0328 01:33:36.071437    6044 command_runner.go:130] ! I0328 01:07:59.949779       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="25.944009ms"
	I0328 01:33:36.071437    6044 command_runner.go:130] ! I0328 01:07:59.950218       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="327.807µs"
	I0328 01:33:36.071437    6044 command_runner.go:130] ! I0328 01:10:54.764176       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-240000-m02\" does not exist"
	I0328 01:33:36.071437    6044 command_runner.go:130] ! I0328 01:10:54.803820       1 event.go:376] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-hsnfl"
	I0328 01:33:36.071978    6044 command_runner.go:130] ! I0328 01:10:54.803944       1 event.go:376] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-t88gz"
	I0328 01:33:36.072023    6044 command_runner.go:130] ! I0328 01:10:54.804885       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-240000-m02" podCIDRs=["10.244.1.0/24"]
	I0328 01:33:36.072023    6044 command_runner.go:130] ! I0328 01:10:58.975442       1 node_lifecycle_controller.go:874] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-240000-m02"
	I0328 01:33:36.072023    6044 command_runner.go:130] ! I0328 01:10:58.975715       1 event.go:376] "Event occurred" object="multinode-240000-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-240000-m02 event: Registered Node multinode-240000-m02 in Controller"
	I0328 01:33:36.072085    6044 command_runner.go:130] ! I0328 01:11:17.665064       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-240000-m02"
	I0328 01:33:36.072119    6044 command_runner.go:130] ! I0328 01:11:46.242165       1 event.go:376] "Event occurred" object="default/busybox" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set busybox-7fdf7869d9 to 2"
	I0328 01:33:36.072161    6044 command_runner.go:130] ! I0328 01:11:46.265582       1 event.go:376] "Event occurred" object="default/busybox-7fdf7869d9" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-7fdf7869d9-zgwm4"
	I0328 01:33:36.072161    6044 command_runner.go:130] ! I0328 01:11:46.287052       1 event.go:376] "Event occurred" object="default/busybox-7fdf7869d9" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-7fdf7869d9-ct428"
	I0328 01:33:36.072200    6044 command_runner.go:130] ! I0328 01:11:46.306059       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="64.440988ms"
	I0328 01:33:36.072200    6044 command_runner.go:130] ! I0328 01:11:46.352353       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="46.180707ms"
	I0328 01:33:36.072200    6044 command_runner.go:130] ! I0328 01:11:46.354927       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="106.701µs"
	I0328 01:33:36.072252    6044 command_runner.go:130] ! I0328 01:11:46.380446       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="75.4µs"
	I0328 01:33:36.072391    6044 command_runner.go:130] ! I0328 01:11:49.177937       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="20.338671ms"
	I0328 01:33:36.072391    6044 command_runner.go:130] ! I0328 01:11:49.178143       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="95.8µs"
	I0328 01:33:36.072442    6044 command_runner.go:130] ! I0328 01:11:49.352601       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="13.382248ms"
	I0328 01:33:36.072442    6044 command_runner.go:130] ! I0328 01:11:49.353052       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="62.5µs"
	I0328 01:33:36.072475    6044 command_runner.go:130] ! I0328 01:15:58.358805       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-240000-m03\" does not exist"
	I0328 01:33:36.072475    6044 command_runner.go:130] ! I0328 01:15:58.359348       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-240000-m02"
	I0328 01:33:36.072475    6044 command_runner.go:130] ! I0328 01:15:58.402286       1 event.go:376] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-jvgx2"
	I0328 01:33:36.072475    6044 command_runner.go:130] ! I0328 01:15:58.402827       1 event.go:376] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-55rch"
	I0328 01:33:36.072475    6044 command_runner.go:130] ! I0328 01:15:58.405421       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-240000-m03" podCIDRs=["10.244.2.0/24"]
	I0328 01:33:36.072475    6044 command_runner.go:130] ! I0328 01:15:59.058703       1 node_lifecycle_controller.go:874] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-240000-m03"
	I0328 01:33:36.072475    6044 command_runner.go:130] ! I0328 01:15:59.059180       1 event.go:376] "Event occurred" object="multinode-240000-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-240000-m03 event: Registered Node multinode-240000-m03 in Controller"
	I0328 01:33:36.072475    6044 command_runner.go:130] ! I0328 01:16:20.751668       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-240000-m02"
	I0328 01:33:36.072475    6044 command_runner.go:130] ! I0328 01:24:29.197407       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-240000-m02"
	I0328 01:33:36.072475    6044 command_runner.go:130] ! I0328 01:24:29.203202       1 event.go:376] "Event occurred" object="multinode-240000-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node multinode-240000-m03 status is now: NodeNotReady"
	I0328 01:33:36.072475    6044 command_runner.go:130] ! I0328 01:24:29.229608       1 event.go:376] "Event occurred" object="kube-system/kube-proxy-55rch" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0328 01:33:36.073013    6044 command_runner.go:130] ! I0328 01:24:29.247522       1 event.go:376] "Event occurred" object="kube-system/kindnet-jvgx2" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0328 01:33:36.073092    6044 command_runner.go:130] ! I0328 01:27:23.686830       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-240000-m02"
	I0328 01:33:36.073092    6044 command_runner.go:130] ! I0328 01:27:24.286010       1 event.go:376] "Event occurred" object="multinode-240000-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RemovingNode" message="Node multinode-240000-m03 event: Removing Node multinode-240000-m03 from Controller"
	I0328 01:33:36.073092    6044 command_runner.go:130] ! I0328 01:27:30.358404       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-240000-m02"
	I0328 01:33:36.073092    6044 command_runner.go:130] ! I0328 01:27:30.361770       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-240000-m03\" does not exist"
	I0328 01:33:36.073190    6044 command_runner.go:130] ! I0328 01:27:30.394360       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-240000-m03" podCIDRs=["10.244.3.0/24"]
	I0328 01:33:36.073190    6044 command_runner.go:130] ! I0328 01:27:34.288477       1 event.go:376] "Event occurred" object="multinode-240000-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-240000-m03 event: Registered Node multinode-240000-m03 in Controller"
	I0328 01:33:36.073224    6044 command_runner.go:130] ! I0328 01:27:36.134336       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-240000-m03"
	I0328 01:33:36.073224    6044 command_runner.go:130] ! I0328 01:29:14.344304       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-240000-m02"
	I0328 01:33:36.073224    6044 command_runner.go:130] ! I0328 01:29:14.346290       1 event.go:376] "Event occurred" object="multinode-240000-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node multinode-240000-m03 status is now: NodeNotReady"
	I0328 01:33:36.073224    6044 command_runner.go:130] ! I0328 01:29:14.370766       1 event.go:376] "Event occurred" object="kube-system/kube-proxy-55rch" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0328 01:33:36.073224    6044 command_runner.go:130] ! I0328 01:29:14.392308       1 event.go:376] "Event occurred" object="kube-system/kindnet-jvgx2" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0328 01:33:36.094272    6044 logs.go:123] Gathering logs for dmesg ...
	I0328 01:33:36.094272    6044 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0328 01:33:36.122078    6044 command_runner.go:130] > [Mar28 01:30] You have booted with nomodeset. This means your GPU drivers are DISABLED
	I0328 01:33:36.122278    6044 command_runner.go:130] > [  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	I0328 01:33:36.122344    6044 command_runner.go:130] > [  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	I0328 01:33:36.122344    6044 command_runner.go:130] > [  +0.141916] RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
	I0328 01:33:36.122450    6044 command_runner.go:130] > [  +0.024106] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	I0328 01:33:36.122450    6044 command_runner.go:130] > [  +0.000005] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	I0328 01:33:36.122450    6044 command_runner.go:130] > [  +0.000001] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	I0328 01:33:36.122618    6044 command_runner.go:130] > [  +0.068008] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	I0328 01:33:36.122618    6044 command_runner.go:130] > [  +0.027431] * Found PM-Timer Bug on the chipset. Due to workarounds for a bug,
	I0328 01:33:36.122618    6044 command_runner.go:130] >               * this clock source is slow. Consider trying other clock sources
	I0328 01:33:36.122741    6044 command_runner.go:130] > [  +5.946328] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	I0328 01:33:36.122741    6044 command_runner.go:130] > [  +0.758535] psmouse serio1: trackpoint: failed to get extended button data, assuming 3 buttons
	I0328 01:33:36.122741    6044 command_runner.go:130] > [  +1.937420] systemd-fstab-generator[115]: Ignoring "noauto" option for root device
	I0328 01:33:36.122741    6044 command_runner.go:130] > [  +7.347197] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	I0328 01:33:36.122940    6044 command_runner.go:130] > [  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	I0328 01:33:36.122940    6044 command_runner.go:130] > [  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	I0328 01:33:36.122940    6044 command_runner.go:130] > [Mar28 01:31] systemd-fstab-generator[640]: Ignoring "noauto" option for root device
	I0328 01:33:36.123124    6044 command_runner.go:130] > [  +0.201840] systemd-fstab-generator[653]: Ignoring "noauto" option for root device
	I0328 01:33:36.123172    6044 command_runner.go:130] > [Mar28 01:32] systemd-fstab-generator[979]: Ignoring "noauto" option for root device
	I0328 01:33:36.123276    6044 command_runner.go:130] > [  +0.108343] kauditd_printk_skb: 73 callbacks suppressed
	I0328 01:33:36.123332    6044 command_runner.go:130] > [  +0.586323] systemd-fstab-generator[1017]: Ignoring "noauto" option for root device
	I0328 01:33:36.123362    6044 command_runner.go:130] > [  +0.218407] systemd-fstab-generator[1029]: Ignoring "noauto" option for root device
	I0328 01:33:36.123362    6044 command_runner.go:130] > [  +0.238441] systemd-fstab-generator[1043]: Ignoring "noauto" option for root device
	I0328 01:33:36.123362    6044 command_runner.go:130] > [  +3.002162] systemd-fstab-generator[1230]: Ignoring "noauto" option for root device
	I0328 01:33:36.123362    6044 command_runner.go:130] > [  +0.206082] systemd-fstab-generator[1242]: Ignoring "noauto" option for root device
	I0328 01:33:36.123362    6044 command_runner.go:130] > [  +0.206423] systemd-fstab-generator[1254]: Ignoring "noauto" option for root device
	I0328 01:33:36.123362    6044 command_runner.go:130] > [  +0.316656] systemd-fstab-generator[1269]: Ignoring "noauto" option for root device
	I0328 01:33:36.123504    6044 command_runner.go:130] > [  +0.941398] systemd-fstab-generator[1391]: Ignoring "noauto" option for root device
	I0328 01:33:36.123625    6044 command_runner.go:130] > [  +0.123620] kauditd_printk_skb: 205 callbacks suppressed
	I0328 01:33:36.123670    6044 command_runner.go:130] > [  +3.687763] systemd-fstab-generator[1526]: Ignoring "noauto" option for root device
	I0328 01:33:36.123670    6044 command_runner.go:130] > [  +1.367953] kauditd_printk_skb: 44 callbacks suppressed
	I0328 01:33:36.123670    6044 command_runner.go:130] > [  +6.014600] kauditd_printk_skb: 30 callbacks suppressed
	I0328 01:33:36.123767    6044 command_runner.go:130] > [  +4.465273] systemd-fstab-generator[3066]: Ignoring "noauto" option for root device
	I0328 01:33:36.123767    6044 command_runner.go:130] > [  +7.649293] kauditd_printk_skb: 70 callbacks suppressed
	I0328 01:33:36.126127    6044 logs.go:123] Gathering logs for describe nodes ...
	I0328 01:33:36.126127    6044 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0328 01:33:36.372929    6044 command_runner.go:130] > Name:               multinode-240000
	I0328 01:33:36.372929    6044 command_runner.go:130] > Roles:              control-plane
	I0328 01:33:36.372929    6044 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0328 01:33:36.373937    6044 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0328 01:33:36.373937    6044 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0328 01:33:36.373937    6044 command_runner.go:130] >                     kubernetes.io/hostname=multinode-240000
	I0328 01:33:36.373937    6044 command_runner.go:130] >                     kubernetes.io/os=linux
	I0328 01:33:36.374024    6044 command_runner.go:130] >                     minikube.k8s.io/commit=1a940f980f77c9bfc8ef93678db8c1f49c9dd79d
	I0328 01:33:36.374081    6044 command_runner.go:130] >                     minikube.k8s.io/name=multinode-240000
	I0328 01:33:36.374081    6044 command_runner.go:130] >                     minikube.k8s.io/primary=true
	I0328 01:33:36.374116    6044 command_runner.go:130] >                     minikube.k8s.io/updated_at=2024_03_28T01_07_32_0700
	I0328 01:33:36.374116    6044 command_runner.go:130] >                     minikube.k8s.io/version=v1.33.0-beta.0
	I0328 01:33:36.374158    6044 command_runner.go:130] >                     node-role.kubernetes.io/control-plane=
	I0328 01:33:36.374158    6044 command_runner.go:130] >                     node.kubernetes.io/exclude-from-external-load-balancers=
	I0328 01:33:36.374200    6044 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0328 01:33:36.374200    6044 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0328 01:33:36.374200    6044 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0328 01:33:36.374252    6044 command_runner.go:130] > CreationTimestamp:  Thu, 28 Mar 2024 01:07:27 +0000
	I0328 01:33:36.374252    6044 command_runner.go:130] > Taints:             <none>
	I0328 01:33:36.374252    6044 command_runner.go:130] > Unschedulable:      false
	I0328 01:33:36.374252    6044 command_runner.go:130] > Lease:
	I0328 01:33:36.374252    6044 command_runner.go:130] >   HolderIdentity:  multinode-240000
	I0328 01:33:36.374252    6044 command_runner.go:130] >   AcquireTime:     <unset>
	I0328 01:33:36.374252    6044 command_runner.go:130] >   RenewTime:       Thu, 28 Mar 2024 01:33:30 +0000
	I0328 01:33:36.374327    6044 command_runner.go:130] > Conditions:
	I0328 01:33:36.374327    6044 command_runner.go:130] >   Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	I0328 01:33:36.374354    6044 command_runner.go:130] >   ----             ------  -----------------                 ------------------                ------                       -------
	I0328 01:33:36.374383    6044 command_runner.go:130] >   MemoryPressure   False   Thu, 28 Mar 2024 01:32:53 +0000   Thu, 28 Mar 2024 01:07:24 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	I0328 01:33:36.374383    6044 command_runner.go:130] >   DiskPressure     False   Thu, 28 Mar 2024 01:32:53 +0000   Thu, 28 Mar 2024 01:07:24 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	I0328 01:33:36.374383    6044 command_runner.go:130] >   PIDPressure      False   Thu, 28 Mar 2024 01:32:53 +0000   Thu, 28 Mar 2024 01:07:24 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	I0328 01:33:36.374383    6044 command_runner.go:130] >   Ready            True    Thu, 28 Mar 2024 01:32:53 +0000   Thu, 28 Mar 2024 01:32:53 +0000   KubeletReady                 kubelet is posting ready status
	I0328 01:33:36.374383    6044 command_runner.go:130] > Addresses:
	I0328 01:33:36.374383    6044 command_runner.go:130] >   InternalIP:  172.28.229.19
	I0328 01:33:36.374383    6044 command_runner.go:130] >   Hostname:    multinode-240000
	I0328 01:33:36.374383    6044 command_runner.go:130] > Capacity:
	I0328 01:33:36.374383    6044 command_runner.go:130] >   cpu:                2
	I0328 01:33:36.374383    6044 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0328 01:33:36.374383    6044 command_runner.go:130] >   hugepages-2Mi:      0
	I0328 01:33:36.374383    6044 command_runner.go:130] >   memory:             2164264Ki
	I0328 01:33:36.374383    6044 command_runner.go:130] >   pods:               110
	I0328 01:33:36.374383    6044 command_runner.go:130] > Allocatable:
	I0328 01:33:36.374383    6044 command_runner.go:130] >   cpu:                2
	I0328 01:33:36.374383    6044 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0328 01:33:36.374383    6044 command_runner.go:130] >   hugepages-2Mi:      0
	I0328 01:33:36.374383    6044 command_runner.go:130] >   memory:             2164264Ki
	I0328 01:33:36.374383    6044 command_runner.go:130] >   pods:               110
	I0328 01:33:36.374383    6044 command_runner.go:130] > System Info:
	I0328 01:33:36.374383    6044 command_runner.go:130] >   Machine ID:                 fe98ff783f164d50926235b1a1a0c9a9
	I0328 01:33:36.374383    6044 command_runner.go:130] >   System UUID:                074b49af-5c50-b749-b0a9-2a3d75bf97a0
	I0328 01:33:36.374383    6044 command_runner.go:130] >   Boot ID:                    88b5f16c-258a-4fb6-a998-e0ffa63edff9
	I0328 01:33:36.374383    6044 command_runner.go:130] >   Kernel Version:             5.10.207
	I0328 01:33:36.374383    6044 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0328 01:33:36.374383    6044 command_runner.go:130] >   Operating System:           linux
	I0328 01:33:36.374383    6044 command_runner.go:130] >   Architecture:               amd64
	I0328 01:33:36.374383    6044 command_runner.go:130] >   Container Runtime Version:  docker://26.0.0
	I0328 01:33:36.374383    6044 command_runner.go:130] >   Kubelet Version:            v1.29.3
	I0328 01:33:36.374383    6044 command_runner.go:130] >   Kube-Proxy Version:         v1.29.3
	I0328 01:33:36.374383    6044 command_runner.go:130] > PodCIDR:                      10.244.0.0/24
	I0328 01:33:36.374383    6044 command_runner.go:130] > PodCIDRs:                     10.244.0.0/24
	I0328 01:33:36.374383    6044 command_runner.go:130] > Non-terminated Pods:          (9 in total)
	I0328 01:33:36.374383    6044 command_runner.go:130] >   Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0328 01:33:36.374383    6044 command_runner.go:130] >   ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	I0328 01:33:36.374383    6044 command_runner.go:130] >   default                     busybox-7fdf7869d9-ct428                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	I0328 01:33:36.374383    6044 command_runner.go:130] >   kube-system                 coredns-76f75df574-776ph                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     25m
	I0328 01:33:36.374383    6044 command_runner.go:130] >   kube-system                 etcd-multinode-240000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         77s
	I0328 01:33:36.374383    6044 command_runner.go:130] >   kube-system                 kindnet-rwghf                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      25m
	I0328 01:33:36.374383    6044 command_runner.go:130] >   kube-system                 kube-apiserver-multinode-240000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         77s
	I0328 01:33:36.374383    6044 command_runner.go:130] >   kube-system                 kube-controller-manager-multinode-240000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         26m
	I0328 01:33:36.374383    6044 command_runner.go:130] >   kube-system                 kube-proxy-47rqg                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         25m
	I0328 01:33:36.374383    6044 command_runner.go:130] >   kube-system                 kube-scheduler-multinode-240000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         26m
	I0328 01:33:36.374383    6044 command_runner.go:130] >   kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         25m
	I0328 01:33:36.374383    6044 command_runner.go:130] > Allocated resources:
	I0328 01:33:36.374383    6044 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0328 01:33:36.374383    6044 command_runner.go:130] >   Resource           Requests     Limits
	I0328 01:33:36.374907    6044 command_runner.go:130] >   --------           --------     ------
	I0328 01:33:36.374907    6044 command_runner.go:130] >   cpu                850m (42%)   100m (5%)
	I0328 01:33:36.374907    6044 command_runner.go:130] >   memory             220Mi (10%)  220Mi (10%)
	I0328 01:33:36.374907    6044 command_runner.go:130] >   ephemeral-storage  0 (0%)       0 (0%)
	I0328 01:33:36.374907    6044 command_runner.go:130] >   hugepages-2Mi      0 (0%)       0 (0%)
	I0328 01:33:36.374907    6044 command_runner.go:130] > Events:
	I0328 01:33:36.374987    6044 command_runner.go:130] >   Type    Reason                   Age                From             Message
	I0328 01:33:36.374987    6044 command_runner.go:130] >   ----    ------                   ----               ----             -------
	I0328 01:33:36.374987    6044 command_runner.go:130] >   Normal  Starting                 25m                kube-proxy       
	I0328 01:33:36.374987    6044 command_runner.go:130] >   Normal  Starting                 73s                kube-proxy       
	I0328 01:33:36.374987    6044 command_runner.go:130] >   Normal  Starting                 26m                kubelet          Starting kubelet.
	I0328 01:33:36.374987    6044 command_runner.go:130] >   Normal  NodeHasSufficientMemory  26m (x8 over 26m)  kubelet          Node multinode-240000 status is now: NodeHasSufficientMemory
	I0328 01:33:36.374987    6044 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    26m (x8 over 26m)  kubelet          Node multinode-240000 status is now: NodeHasNoDiskPressure
	I0328 01:33:36.374987    6044 command_runner.go:130] >   Normal  NodeHasSufficientPID     26m (x7 over 26m)  kubelet          Node multinode-240000 status is now: NodeHasSufficientPID
	I0328 01:33:36.374987    6044 command_runner.go:130] >   Normal  NodeAllocatableEnforced  26m                kubelet          Updated Node Allocatable limit across pods
	I0328 01:33:36.374987    6044 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    26m                kubelet          Node multinode-240000 status is now: NodeHasNoDiskPressure
	I0328 01:33:36.374987    6044 command_runner.go:130] >   Normal  NodeAllocatableEnforced  26m                kubelet          Updated Node Allocatable limit across pods
	I0328 01:33:36.374987    6044 command_runner.go:130] >   Normal  NodeHasSufficientMemory  26m                kubelet          Node multinode-240000 status is now: NodeHasSufficientMemory
	I0328 01:33:36.374987    6044 command_runner.go:130] >   Normal  NodeHasSufficientPID     26m                kubelet          Node multinode-240000 status is now: NodeHasSufficientPID
	I0328 01:33:36.374987    6044 command_runner.go:130] >   Normal  Starting                 26m                kubelet          Starting kubelet.
	I0328 01:33:36.374987    6044 command_runner.go:130] >   Normal  RegisteredNode           25m                node-controller  Node multinode-240000 event: Registered Node multinode-240000 in Controller
	I0328 01:33:36.374987    6044 command_runner.go:130] >   Normal  NodeReady                25m                kubelet          Node multinode-240000 status is now: NodeReady
	I0328 01:33:36.374987    6044 command_runner.go:130] >   Normal  Starting                 83s                kubelet          Starting kubelet.
	I0328 01:33:36.374987    6044 command_runner.go:130] >   Normal  NodeHasSufficientPID     83s (x7 over 83s)  kubelet          Node multinode-240000 status is now: NodeHasSufficientPID
	I0328 01:33:36.374987    6044 command_runner.go:130] >   Normal  NodeAllocatableEnforced  83s                kubelet          Updated Node Allocatable limit across pods
	I0328 01:33:36.374987    6044 command_runner.go:130] >   Normal  NodeHasSufficientMemory  82s (x8 over 83s)  kubelet          Node multinode-240000 status is now: NodeHasSufficientMemory
	I0328 01:33:36.374987    6044 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    82s (x8 over 83s)  kubelet          Node multinode-240000 status is now: NodeHasNoDiskPressure
	I0328 01:33:36.374987    6044 command_runner.go:130] >   Normal  RegisteredNode           64s                node-controller  Node multinode-240000 event: Registered Node multinode-240000 in Controller
	I0328 01:33:36.374987    6044 command_runner.go:130] > Name:               multinode-240000-m02
	I0328 01:33:36.374987    6044 command_runner.go:130] > Roles:              <none>
	I0328 01:33:36.374987    6044 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0328 01:33:36.374987    6044 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0328 01:33:36.374987    6044 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0328 01:33:36.374987    6044 command_runner.go:130] >                     kubernetes.io/hostname=multinode-240000-m02
	I0328 01:33:36.374987    6044 command_runner.go:130] >                     kubernetes.io/os=linux
	I0328 01:33:36.374987    6044 command_runner.go:130] >                     minikube.k8s.io/commit=1a940f980f77c9bfc8ef93678db8c1f49c9dd79d
	I0328 01:33:36.374987    6044 command_runner.go:130] >                     minikube.k8s.io/name=multinode-240000
	I0328 01:33:36.374987    6044 command_runner.go:130] >                     minikube.k8s.io/primary=false
	I0328 01:33:36.374987    6044 command_runner.go:130] >                     minikube.k8s.io/updated_at=2024_03_28T01_10_55_0700
	I0328 01:33:36.374987    6044 command_runner.go:130] >                     minikube.k8s.io/version=v1.33.0-beta.0
	I0328 01:33:36.374987    6044 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0328 01:33:36.374987    6044 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0328 01:33:36.374987    6044 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0328 01:33:36.374987    6044 command_runner.go:130] > CreationTimestamp:  Thu, 28 Mar 2024 01:10:54 +0000
	I0328 01:33:36.375539    6044 command_runner.go:130] > Taints:             node.kubernetes.io/unreachable:NoExecute
	I0328 01:33:36.375539    6044 command_runner.go:130] >                     node.kubernetes.io/unreachable:NoSchedule
	I0328 01:33:36.375539    6044 command_runner.go:130] > Unschedulable:      false
	I0328 01:33:36.375539    6044 command_runner.go:130] > Lease:
	I0328 01:33:36.375539    6044 command_runner.go:130] >   HolderIdentity:  multinode-240000-m02
	I0328 01:33:36.375539    6044 command_runner.go:130] >   AcquireTime:     <unset>
	I0328 01:33:36.375539    6044 command_runner.go:130] >   RenewTime:       Thu, 28 Mar 2024 01:28:58 +0000
	I0328 01:33:36.375539    6044 command_runner.go:130] > Conditions:
	I0328 01:33:36.375539    6044 command_runner.go:130] >   Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	I0328 01:33:36.375539    6044 command_runner.go:130] >   ----             ------    -----------------                 ------------------                ------              -------
	I0328 01:33:36.375539    6044 command_runner.go:130] >   MemoryPressure   Unknown   Thu, 28 Mar 2024 01:27:18 +0000   Thu, 28 Mar 2024 01:33:12 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0328 01:33:36.375539    6044 command_runner.go:130] >   DiskPressure     Unknown   Thu, 28 Mar 2024 01:27:18 +0000   Thu, 28 Mar 2024 01:33:12 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0328 01:33:36.375698    6044 command_runner.go:130] >   PIDPressure      Unknown   Thu, 28 Mar 2024 01:27:18 +0000   Thu, 28 Mar 2024 01:33:12 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0328 01:33:36.375698    6044 command_runner.go:130] >   Ready            Unknown   Thu, 28 Mar 2024 01:27:18 +0000   Thu, 28 Mar 2024 01:33:12 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0328 01:33:36.375698    6044 command_runner.go:130] > Addresses:
	I0328 01:33:36.375754    6044 command_runner.go:130] >   InternalIP:  172.28.230.250
	I0328 01:33:36.375754    6044 command_runner.go:130] >   Hostname:    multinode-240000-m02
	I0328 01:33:36.375754    6044 command_runner.go:130] > Capacity:
	I0328 01:33:36.375754    6044 command_runner.go:130] >   cpu:                2
	I0328 01:33:36.375754    6044 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0328 01:33:36.375754    6044 command_runner.go:130] >   hugepages-2Mi:      0
	I0328 01:33:36.375754    6044 command_runner.go:130] >   memory:             2164264Ki
	I0328 01:33:36.375754    6044 command_runner.go:130] >   pods:               110
	I0328 01:33:36.375754    6044 command_runner.go:130] > Allocatable:
	I0328 01:33:36.375754    6044 command_runner.go:130] >   cpu:                2
	I0328 01:33:36.375754    6044 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0328 01:33:36.375754    6044 command_runner.go:130] >   hugepages-2Mi:      0
	I0328 01:33:36.375754    6044 command_runner.go:130] >   memory:             2164264Ki
	I0328 01:33:36.375839    6044 command_runner.go:130] >   pods:               110
	I0328 01:33:36.375839    6044 command_runner.go:130] > System Info:
	I0328 01:33:36.375839    6044 command_runner.go:130] >   Machine ID:                 2bcbb6f523d04ea69ba7f23d0cdfec75
	I0328 01:33:36.375839    6044 command_runner.go:130] >   System UUID:                d499bd2a-38ff-6a40-b0a5-5534aeedd854
	I0328 01:33:36.375839    6044 command_runner.go:130] >   Boot ID:                    cfc1ec0e-7646-40c9-8245-9d09d77d6b1d
	I0328 01:33:36.375839    6044 command_runner.go:130] >   Kernel Version:             5.10.207
	I0328 01:33:36.375839    6044 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0328 01:33:36.375839    6044 command_runner.go:130] >   Operating System:           linux
	I0328 01:33:36.375902    6044 command_runner.go:130] >   Architecture:               amd64
	I0328 01:33:36.375902    6044 command_runner.go:130] >   Container Runtime Version:  docker://26.0.0
	I0328 01:33:36.375902    6044 command_runner.go:130] >   Kubelet Version:            v1.29.3
	I0328 01:33:36.375902    6044 command_runner.go:130] >   Kube-Proxy Version:         v1.29.3
	I0328 01:33:36.375902    6044 command_runner.go:130] > PodCIDR:                      10.244.1.0/24
	I0328 01:33:36.375902    6044 command_runner.go:130] > PodCIDRs:                     10.244.1.0/24
	I0328 01:33:36.375902    6044 command_runner.go:130] > Non-terminated Pods:          (3 in total)
	I0328 01:33:36.375970    6044 command_runner.go:130] >   Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0328 01:33:36.375970    6044 command_runner.go:130] >   ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	I0328 01:33:36.375998    6044 command_runner.go:130] >   default                     busybox-7fdf7869d9-zgwm4    0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	I0328 01:33:36.375998    6044 command_runner.go:130] >   kube-system                 kindnet-hsnfl               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      22m
	I0328 01:33:36.375998    6044 command_runner.go:130] >   kube-system                 kube-proxy-t88gz            0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	I0328 01:33:36.375998    6044 command_runner.go:130] > Allocated resources:
	I0328 01:33:36.376060    6044 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0328 01:33:36.376078    6044 command_runner.go:130] >   Resource           Requests   Limits
	I0328 01:33:36.376078    6044 command_runner.go:130] >   --------           --------   ------
	I0328 01:33:36.376102    6044 command_runner.go:130] >   cpu                100m (5%)  100m (5%)
	I0328 01:33:36.376102    6044 command_runner.go:130] >   memory             50Mi (2%)  50Mi (2%)
	I0328 01:33:36.376102    6044 command_runner.go:130] >   ephemeral-storage  0 (0%)     0 (0%)
	I0328 01:33:36.376102    6044 command_runner.go:130] >   hugepages-2Mi      0 (0%)     0 (0%)
	I0328 01:33:36.376102    6044 command_runner.go:130] > Events:
	I0328 01:33:36.376102    6044 command_runner.go:130] >   Type    Reason                   Age                From             Message
	I0328 01:33:36.376163    6044 command_runner.go:130] >   ----    ------                   ----               ----             -------
	I0328 01:33:36.376163    6044 command_runner.go:130] >   Normal  Starting                 22m                kube-proxy       
	I0328 01:33:36.376163    6044 command_runner.go:130] >   Normal  Starting                 22m                kubelet          Starting kubelet.
	I0328 01:33:36.376163    6044 command_runner.go:130] >   Normal  NodeHasSufficientMemory  22m (x2 over 22m)  kubelet          Node multinode-240000-m02 status is now: NodeHasSufficientMemory
	I0328 01:33:36.376223    6044 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    22m (x2 over 22m)  kubelet          Node multinode-240000-m02 status is now: NodeHasNoDiskPressure
	I0328 01:33:36.376223    6044 command_runner.go:130] >   Normal  NodeHasSufficientPID     22m (x2 over 22m)  kubelet          Node multinode-240000-m02 status is now: NodeHasSufficientPID
	I0328 01:33:36.376254    6044 command_runner.go:130] >   Normal  NodeAllocatableEnforced  22m                kubelet          Updated Node Allocatable limit across pods
	I0328 01:33:36.376254    6044 command_runner.go:130] >   Normal  RegisteredNode           22m                node-controller  Node multinode-240000-m02 event: Registered Node multinode-240000-m02 in Controller
	I0328 01:33:36.376254    6044 command_runner.go:130] >   Normal  NodeReady                22m                kubelet          Node multinode-240000-m02 status is now: NodeReady
	I0328 01:33:36.376319    6044 command_runner.go:130] >   Normal  RegisteredNode           64s                node-controller  Node multinode-240000-m02 event: Registered Node multinode-240000-m02 in Controller
	I0328 01:33:36.376345    6044 command_runner.go:130] >   Normal  NodeNotReady             24s                node-controller  Node multinode-240000-m02 status is now: NodeNotReady
	I0328 01:33:36.376373    6044 command_runner.go:130] > Name:               multinode-240000-m03
	I0328 01:33:36.376373    6044 command_runner.go:130] > Roles:              <none>
	I0328 01:33:36.376373    6044 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0328 01:33:36.376373    6044 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0328 01:33:36.376373    6044 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0328 01:33:36.376373    6044 command_runner.go:130] >                     kubernetes.io/hostname=multinode-240000-m03
	I0328 01:33:36.376373    6044 command_runner.go:130] >                     kubernetes.io/os=linux
	I0328 01:33:36.376451    6044 command_runner.go:130] >                     minikube.k8s.io/commit=1a940f980f77c9bfc8ef93678db8c1f49c9dd79d
	I0328 01:33:36.376451    6044 command_runner.go:130] >                     minikube.k8s.io/name=multinode-240000
	I0328 01:33:36.376485    6044 command_runner.go:130] >                     minikube.k8s.io/primary=false
	I0328 01:33:36.376485    6044 command_runner.go:130] >                     minikube.k8s.io/updated_at=2024_03_28T01_27_31_0700
	I0328 01:33:36.376485    6044 command_runner.go:130] >                     minikube.k8s.io/version=v1.33.0-beta.0
	I0328 01:33:36.376485    6044 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0328 01:33:36.376485    6044 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0328 01:33:36.376485    6044 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0328 01:33:36.376485    6044 command_runner.go:130] > CreationTimestamp:  Thu, 28 Mar 2024 01:27:30 +0000
	I0328 01:33:36.376485    6044 command_runner.go:130] > Taints:             node.kubernetes.io/unreachable:NoExecute
	I0328 01:33:36.376485    6044 command_runner.go:130] >                     node.kubernetes.io/unreachable:NoSchedule
	I0328 01:33:36.376485    6044 command_runner.go:130] > Unschedulable:      false
	I0328 01:33:36.376613    6044 command_runner.go:130] > Lease:
	I0328 01:33:36.376613    6044 command_runner.go:130] >   HolderIdentity:  multinode-240000-m03
	I0328 01:33:36.376634    6044 command_runner.go:130] >   AcquireTime:     <unset>
	I0328 01:33:36.376634    6044 command_runner.go:130] >   RenewTime:       Thu, 28 Mar 2024 01:28:31 +0000
	I0328 01:33:36.376634    6044 command_runner.go:130] > Conditions:
	I0328 01:33:36.376634    6044 command_runner.go:130] >   Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	I0328 01:33:36.376705    6044 command_runner.go:130] >   ----             ------    -----------------                 ------------------                ------              -------
	I0328 01:33:36.376705    6044 command_runner.go:130] >   MemoryPressure   Unknown   Thu, 28 Mar 2024 01:27:36 +0000   Thu, 28 Mar 2024 01:29:14 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0328 01:33:36.376705    6044 command_runner.go:130] >   DiskPressure     Unknown   Thu, 28 Mar 2024 01:27:36 +0000   Thu, 28 Mar 2024 01:29:14 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0328 01:33:36.376772    6044 command_runner.go:130] >   PIDPressure      Unknown   Thu, 28 Mar 2024 01:27:36 +0000   Thu, 28 Mar 2024 01:29:14 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0328 01:33:36.376772    6044 command_runner.go:130] >   Ready            Unknown   Thu, 28 Mar 2024 01:27:36 +0000   Thu, 28 Mar 2024 01:29:14 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0328 01:33:36.376772    6044 command_runner.go:130] > Addresses:
	I0328 01:33:36.376772    6044 command_runner.go:130] >   InternalIP:  172.28.224.172
	I0328 01:33:36.376772    6044 command_runner.go:130] >   Hostname:    multinode-240000-m03
	I0328 01:33:36.376772    6044 command_runner.go:130] > Capacity:
	I0328 01:33:36.376772    6044 command_runner.go:130] >   cpu:                2
	I0328 01:33:36.376772    6044 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0328 01:33:36.376772    6044 command_runner.go:130] >   hugepages-2Mi:      0
	I0328 01:33:36.376861    6044 command_runner.go:130] >   memory:             2164264Ki
	I0328 01:33:36.376861    6044 command_runner.go:130] >   pods:               110
	I0328 01:33:36.376861    6044 command_runner.go:130] > Allocatable:
	I0328 01:33:36.376861    6044 command_runner.go:130] >   cpu:                2
	I0328 01:33:36.376861    6044 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0328 01:33:36.376861    6044 command_runner.go:130] >   hugepages-2Mi:      0
	I0328 01:33:36.376861    6044 command_runner.go:130] >   memory:             2164264Ki
	I0328 01:33:36.376861    6044 command_runner.go:130] >   pods:               110
	I0328 01:33:36.376935    6044 command_runner.go:130] > System Info:
	I0328 01:33:36.376935    6044 command_runner.go:130] >   Machine ID:                 53e5a22090614654950f5f4d91307651
	I0328 01:33:36.376935    6044 command_runner.go:130] >   System UUID:                1b1cc332-0092-fa4b-8d09-1c599aae83ad
	I0328 01:33:36.376935    6044 command_runner.go:130] >   Boot ID:                    7cabd891-d8ad-4af2-8060-94ae87174528
	I0328 01:33:36.376935    6044 command_runner.go:130] >   Kernel Version:             5.10.207
	I0328 01:33:36.376991    6044 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0328 01:33:36.376991    6044 command_runner.go:130] >   Operating System:           linux
	I0328 01:33:36.377018    6044 command_runner.go:130] >   Architecture:               amd64
	I0328 01:33:36.377018    6044 command_runner.go:130] >   Container Runtime Version:  docker://26.0.0
	I0328 01:33:36.377048    6044 command_runner.go:130] >   Kubelet Version:            v1.29.3
	I0328 01:33:36.377048    6044 command_runner.go:130] >   Kube-Proxy Version:         v1.29.3
	I0328 01:33:36.377048    6044 command_runner.go:130] > PodCIDR:                      10.244.3.0/24
	I0328 01:33:36.377080    6044 command_runner.go:130] > PodCIDRs:                     10.244.3.0/24
	I0328 01:33:36.377080    6044 command_runner.go:130] > Non-terminated Pods:          (2 in total)
	I0328 01:33:36.377115    6044 command_runner.go:130] >   Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0328 01:33:36.377143    6044 command_runner.go:130] >   ---------                   ----                ------------  ----------  ---------------  -------------  ---
	I0328 01:33:36.377143    6044 command_runner.go:130] >   kube-system                 kindnet-jvgx2       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      17m
	I0328 01:33:36.377143    6044 command_runner.go:130] >   kube-system                 kube-proxy-55rch    0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	I0328 01:33:36.377143    6044 command_runner.go:130] > Allocated resources:
	I0328 01:33:36.377143    6044 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0328 01:33:36.377143    6044 command_runner.go:130] >   Resource           Requests   Limits
	I0328 01:33:36.377143    6044 command_runner.go:130] >   --------           --------   ------
	I0328 01:33:36.377143    6044 command_runner.go:130] >   cpu                100m (5%)  100m (5%)
	I0328 01:33:36.377143    6044 command_runner.go:130] >   memory             50Mi (2%)  50Mi (2%)
	I0328 01:33:36.377143    6044 command_runner.go:130] >   ephemeral-storage  0 (0%)     0 (0%)
	I0328 01:33:36.377143    6044 command_runner.go:130] >   hugepages-2Mi      0 (0%)     0 (0%)
	I0328 01:33:36.377143    6044 command_runner.go:130] > Events:
	I0328 01:33:36.377143    6044 command_runner.go:130] >   Type    Reason                   Age                  From             Message
	I0328 01:33:36.377143    6044 command_runner.go:130] >   ----    ------                   ----                 ----             -------
	I0328 01:33:36.377143    6044 command_runner.go:130] >   Normal  Starting                 17m                  kube-proxy       
	I0328 01:33:36.377143    6044 command_runner.go:130] >   Normal  Starting                 6m3s                 kube-proxy       
	I0328 01:33:36.377143    6044 command_runner.go:130] >   Normal  NodeHasSufficientMemory  17m (x2 over 17m)    kubelet          Node multinode-240000-m03 status is now: NodeHasSufficientMemory
	I0328 01:33:36.377143    6044 command_runner.go:130] >   Normal  NodeAllocatableEnforced  17m                  kubelet          Updated Node Allocatable limit across pods
	I0328 01:33:36.377143    6044 command_runner.go:130] >   Normal  Starting                 17m                  kubelet          Starting kubelet.
	I0328 01:33:36.377143    6044 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    17m (x2 over 17m)    kubelet          Node multinode-240000-m03 status is now: NodeHasNoDiskPressure
	I0328 01:33:36.377143    6044 command_runner.go:130] >   Normal  NodeHasSufficientPID     17m (x2 over 17m)    kubelet          Node multinode-240000-m03 status is now: NodeHasSufficientPID
	I0328 01:33:36.377143    6044 command_runner.go:130] >   Normal  NodeReady                17m                  kubelet          Node multinode-240000-m03 status is now: NodeReady
	I0328 01:33:36.377143    6044 command_runner.go:130] >   Normal  Starting                 6m6s                 kubelet          Starting kubelet.
	I0328 01:33:36.377143    6044 command_runner.go:130] >   Normal  NodeHasSufficientMemory  6m6s (x2 over 6m6s)  kubelet          Node multinode-240000-m03 status is now: NodeHasSufficientMemory
	I0328 01:33:36.377143    6044 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    6m6s (x2 over 6m6s)  kubelet          Node multinode-240000-m03 status is now: NodeHasNoDiskPressure
	I0328 01:33:36.377143    6044 command_runner.go:130] >   Normal  NodeHasSufficientPID     6m6s (x2 over 6m6s)  kubelet          Node multinode-240000-m03 status is now: NodeHasSufficientPID
	I0328 01:33:36.377143    6044 command_runner.go:130] >   Normal  NodeAllocatableEnforced  6m6s                 kubelet          Updated Node Allocatable limit across pods
	I0328 01:33:36.377143    6044 command_runner.go:130] >   Normal  RegisteredNode           6m2s                 node-controller  Node multinode-240000-m03 event: Registered Node multinode-240000-m03 in Controller
	I0328 01:33:36.377143    6044 command_runner.go:130] >   Normal  NodeReady                6m                   kubelet          Node multinode-240000-m03 status is now: NodeReady
	I0328 01:33:36.377143    6044 command_runner.go:130] >   Normal  NodeNotReady             4m22s                node-controller  Node multinode-240000-m03 status is now: NodeNotReady
	I0328 01:33:36.377143    6044 command_runner.go:130] >   Normal  RegisteredNode           64s                  node-controller  Node multinode-240000-m03 event: Registered Node multinode-240000-m03 in Controller
	I0328 01:33:36.389824    6044 logs.go:123] Gathering logs for coredns [29e516c918ef] ...
	I0328 01:33:36.389824    6044 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29e516c918ef"
	I0328 01:33:36.421793    6044 command_runner.go:130] > .:53
	I0328 01:33:36.422599    6044 command_runner.go:130] > [INFO] plugin/reload: Running configuration SHA512 = 61f4d0960164fdf8d8157aaa96d041acf5b29f3c98ba802d705114162ff9f2cc889bbb973f9b8023f3112734912ee6f4eadc4faa21115183d5697de30dae3805
	I0328 01:33:36.422599    6044 command_runner.go:130] > CoreDNS-1.11.1
	I0328 01:33:36.422632    6044 command_runner.go:130] > linux/amd64, go1.20.7, ae2bbc2
	I0328 01:33:36.422632    6044 command_runner.go:130] > [INFO] 127.0.0.1:60283 - 16312 "HINFO IN 2326044719089555672.3300393267380208701. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.054677372s
	I0328 01:33:36.422632    6044 command_runner.go:130] > [INFO] 10.244.0.3:41371 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000247501s
	I0328 01:33:36.422632    6044 command_runner.go:130] > [INFO] 10.244.0.3:43447 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.117900616s
	I0328 01:33:36.422632    6044 command_runner.go:130] > [INFO] 10.244.0.3:42513 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd 60 0.033474818s
	I0328 01:33:36.422632    6044 command_runner.go:130] > [INFO] 10.244.0.3:40448 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 1.188161196s
	I0328 01:33:36.422632    6044 command_runner.go:130] > [INFO] 10.244.1.2:56943 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000152401s
	I0328 01:33:36.422632    6044 command_runner.go:130] > [INFO] 10.244.1.2:41058 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 140 0.000086901s
	I0328 01:33:36.422632    6044 command_runner.go:130] > [INFO] 10.244.1.2:34293 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd 60 0.0000605s
	I0328 01:33:36.422632    6044 command_runner.go:130] > [INFO] 10.244.1.2:49894 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,aa,rd,ra 140 0.00006s
	I0328 01:33:36.422632    6044 command_runner.go:130] > [INFO] 10.244.0.3:49837 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001111s
	I0328 01:33:36.422632    6044 command_runner.go:130] > [INFO] 10.244.0.3:33220 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.017189461s
	I0328 01:33:36.422632    6044 command_runner.go:130] > [INFO] 10.244.0.3:45579 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000277601s
	I0328 01:33:36.422632    6044 command_runner.go:130] > [INFO] 10.244.0.3:51082 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000190101s
	I0328 01:33:36.422632    6044 command_runner.go:130] > [INFO] 10.244.0.3:51519 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.026528294s
	I0328 01:33:36.422632    6044 command_runner.go:130] > [INFO] 10.244.0.3:59498 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000117701s
	I0328 01:33:36.422632    6044 command_runner.go:130] > [INFO] 10.244.0.3:42474 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000217s
	I0328 01:33:36.422632    6044 command_runner.go:130] > [INFO] 10.244.0.3:60151 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0001204s
	I0328 01:33:36.422632    6044 command_runner.go:130] > [INFO] 10.244.1.2:50831 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001128s
	I0328 01:33:36.422632    6044 command_runner.go:130] > [INFO] 10.244.1.2:41628 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.0000727s
	I0328 01:33:36.422632    6044 command_runner.go:130] > [INFO] 10.244.1.2:58750 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000090601s
	I0328 01:33:36.422632    6044 command_runner.go:130] > [INFO] 10.244.1.2:59003 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0000565s
	I0328 01:33:36.422632    6044 command_runner.go:130] > [INFO] 10.244.1.2:44988 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.0000534s
	I0328 01:33:36.422632    6044 command_runner.go:130] > [INFO] 10.244.1.2:46242 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0000553s
	I0328 01:33:36.422632    6044 command_runner.go:130] > [INFO] 10.244.1.2:54917 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0000638s
	I0328 01:33:36.422632    6044 command_runner.go:130] > [INFO] 10.244.1.2:39304 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000177201s
	I0328 01:33:36.422632    6044 command_runner.go:130] > [INFO] 10.244.0.3:48823 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0000796s
	I0328 01:33:36.422632    6044 command_runner.go:130] > [INFO] 10.244.0.3:44709 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000142901s
	I0328 01:33:36.422632    6044 command_runner.go:130] > [INFO] 10.244.0.3:48375 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0000774s
	I0328 01:33:36.422632    6044 command_runner.go:130] > [INFO] 10.244.0.3:58925 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000125101s
	I0328 01:33:36.422632    6044 command_runner.go:130] > [INFO] 10.244.1.2:59246 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001171s
	I0328 01:33:36.422632    6044 command_runner.go:130] > [INFO] 10.244.1.2:47730 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0000697s
	I0328 01:33:36.422632    6044 command_runner.go:130] > [INFO] 10.244.1.2:33031 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0000695s
	I0328 01:33:36.422632    6044 command_runner.go:130] > [INFO] 10.244.1.2:50853 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000057s
	I0328 01:33:36.422632    6044 command_runner.go:130] > [INFO] 10.244.0.3:39682 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000390002s
	I0328 01:33:36.422632    6044 command_runner.go:130] > [INFO] 10.244.0.3:52761 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000108301s
	I0328 01:33:36.422632    6044 command_runner.go:130] > [INFO] 10.244.0.3:46476 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000158601s
	I0328 01:33:36.423248    6044 command_runner.go:130] > [INFO] 10.244.0.3:57613 - 5 "PTR IN 1.224.28.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.0000931s
	I0328 01:33:36.423248    6044 command_runner.go:130] > [INFO] 10.244.1.2:43367 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000233301s
	I0328 01:33:36.423248    6044 command_runner.go:130] > [INFO] 10.244.1.2:50120 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.0002331s
	I0328 01:33:36.423248    6044 command_runner.go:130] > [INFO] 10.244.1.2:43779 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.0000821s
	I0328 01:33:36.423334    6044 command_runner.go:130] > [INFO] 10.244.1.2:37155 - 5 "PTR IN 1.224.28.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.0000589s
	I0328 01:33:36.423334    6044 command_runner.go:130] > [INFO] SIGTERM: Shutting down servers then terminating
	I0328 01:33:36.423334    6044 command_runner.go:130] > [INFO] plugin/health: Going into lameduck mode for 5s
	I0328 01:33:36.426193    6044 logs.go:123] Gathering logs for kube-scheduler [bc83a37dbd03] ...
	I0328 01:33:36.426193    6044 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc83a37dbd03"
	I0328 01:33:36.454246    6044 command_runner.go:130] ! I0328 01:32:16.704993       1 serving.go:380] Generated self-signed cert in-memory
	I0328 01:33:36.454307    6044 command_runner.go:130] ! W0328 01:32:19.361735       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	I0328 01:33:36.456978    6044 command_runner.go:130] ! W0328 01:32:19.361772       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	I0328 01:33:36.456978    6044 command_runner.go:130] ! W0328 01:32:19.361786       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	I0328 01:33:36.457301    6044 command_runner.go:130] ! W0328 01:32:19.361794       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0328 01:33:36.457368    6044 command_runner.go:130] ! I0328 01:32:19.443650       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.29.3"
	I0328 01:33:36.457368    6044 command_runner.go:130] ! I0328 01:32:19.443696       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0328 01:33:36.457368    6044 command_runner.go:130] ! I0328 01:32:19.451824       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0328 01:33:36.457368    6044 command_runner.go:130] ! I0328 01:32:19.452157       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0328 01:33:36.457368    6044 command_runner.go:130] ! I0328 01:32:19.452206       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0328 01:33:36.457368    6044 command_runner.go:130] ! I0328 01:32:19.452231       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0328 01:33:36.457368    6044 command_runner.go:130] ! I0328 01:32:19.556393       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0328 01:33:36.460469    6044 logs.go:123] Gathering logs for kindnet [ee99098e42fc] ...
	I0328 01:33:36.460579    6044 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee99098e42fc"
	I0328 01:33:36.491736    6044 command_runner.go:130] ! I0328 01:32:22.319753       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0328 01:33:36.491736    6044 command_runner.go:130] ! I0328 01:32:22.320254       1 main.go:107] hostIP = 172.28.229.19
	I0328 01:33:36.491736    6044 command_runner.go:130] ! podIP = 172.28.229.19
	I0328 01:33:36.491736    6044 command_runner.go:130] ! I0328 01:32:22.321740       1 main.go:116] setting mtu 1500 for CNI 
	I0328 01:33:36.492649    6044 command_runner.go:130] ! I0328 01:32:22.321777       1 main.go:146] kindnetd IP family: "ipv4"
	I0328 01:33:36.492649    6044 command_runner.go:130] ! I0328 01:32:22.321799       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0328 01:33:36.492732    6044 command_runner.go:130] ! I0328 01:32:52.738929       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: i/o timeout
	I0328 01:33:36.492772    6044 command_runner.go:130] ! I0328 01:32:52.794200       1 main.go:223] Handling node with IPs: map[172.28.229.19:{}]
	I0328 01:33:36.492825    6044 command_runner.go:130] ! I0328 01:32:52.794320       1 main.go:227] handling current node
	I0328 01:33:36.492825    6044 command_runner.go:130] ! I0328 01:32:52.794662       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:36.492865    6044 command_runner.go:130] ! I0328 01:32:52.794805       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:36.492915    6044 command_runner.go:130] ! I0328 01:32:52.794957       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 172.28.230.250 Flags: [] Table: 0} 
	I0328 01:33:36.492956    6044 command_runner.go:130] ! I0328 01:32:52.795458       1 main.go:223] Handling node with IPs: map[172.28.224.172:{}]
	I0328 01:33:36.492998    6044 command_runner.go:130] ! I0328 01:32:52.795540       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.3.0/24] 
	I0328 01:33:36.493038    6044 command_runner.go:130] ! I0328 01:32:52.795606       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.3.0/24 Src: <nil> Gw: 172.28.224.172 Flags: [] Table: 0} 
	I0328 01:33:36.493038    6044 command_runner.go:130] ! I0328 01:33:02.803479       1 main.go:223] Handling node with IPs: map[172.28.229.19:{}]
	I0328 01:33:36.493038    6044 command_runner.go:130] ! I0328 01:33:02.803569       1 main.go:227] handling current node
	I0328 01:33:36.493128    6044 command_runner.go:130] ! I0328 01:33:02.803584       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:36.493128    6044 command_runner.go:130] ! I0328 01:33:02.803592       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:36.493128    6044 command_runner.go:130] ! I0328 01:33:02.803771       1 main.go:223] Handling node with IPs: map[172.28.224.172:{}]
	I0328 01:33:36.493128    6044 command_runner.go:130] ! I0328 01:33:02.803938       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.3.0/24] 
	I0328 01:33:36.493197    6044 command_runner.go:130] ! I0328 01:33:12.813148       1 main.go:223] Handling node with IPs: map[172.28.229.19:{}]
	I0328 01:33:36.493231    6044 command_runner.go:130] ! I0328 01:33:12.813258       1 main.go:227] handling current node
	I0328 01:33:36.493231    6044 command_runner.go:130] ! I0328 01:33:12.813273       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:36.493231    6044 command_runner.go:130] ! I0328 01:33:12.813281       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:36.493284    6044 command_runner.go:130] ! I0328 01:33:12.813393       1 main.go:223] Handling node with IPs: map[172.28.224.172:{}]
	I0328 01:33:36.493318    6044 command_runner.go:130] ! I0328 01:33:12.813441       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.3.0/24] 
	I0328 01:33:36.493347    6044 command_runner.go:130] ! I0328 01:33:22.829358       1 main.go:223] Handling node with IPs: map[172.28.229.19:{}]
	I0328 01:33:36.493347    6044 command_runner.go:130] ! I0328 01:33:22.829449       1 main.go:227] handling current node
	I0328 01:33:36.493347    6044 command_runner.go:130] ! I0328 01:33:22.829466       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:36.493347    6044 command_runner.go:130] ! I0328 01:33:22.829475       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:36.493347    6044 command_runner.go:130] ! I0328 01:33:22.829915       1 main.go:223] Handling node with IPs: map[172.28.224.172:{}]
	I0328 01:33:36.493347    6044 command_runner.go:130] ! I0328 01:33:22.829982       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.3.0/24] 
	I0328 01:33:36.493347    6044 command_runner.go:130] ! I0328 01:33:32.845005       1 main.go:223] Handling node with IPs: map[172.28.229.19:{}]
	I0328 01:33:36.493347    6044 command_runner.go:130] ! I0328 01:33:32.845083       1 main.go:227] handling current node
	I0328 01:33:36.493347    6044 command_runner.go:130] ! I0328 01:33:32.845096       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:33:36.493347    6044 command_runner.go:130] ! I0328 01:33:32.845121       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:33:36.493347    6044 command_runner.go:130] ! I0328 01:33:32.845312       1 main.go:223] Handling node with IPs: map[172.28.224.172:{}]
	I0328 01:33:36.493347    6044 command_runner.go:130] ! I0328 01:33:32.845670       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.3.0/24] 
	I0328 01:33:39.000229    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/namespaces/kube-system/pods
	I0328 01:33:39.000229    6044 round_trippers.go:469] Request Headers:
	I0328 01:33:39.000326    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:33:39.000326    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:33:39.005527    6044 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 01:33:39.005527    6044 round_trippers.go:577] Response Headers:
	I0328 01:33:39.005527    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:33:39.005527    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:33:39 GMT
	I0328 01:33:39.006196    6044 round_trippers.go:580]     Audit-Id: 19660e92-c8d3-4a64-8bd9-49db821e51ec
	I0328 01:33:39.006196    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:33:39.006196    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:33:39.006196    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:33:39.008456    6044 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"2077"},"items":[{"metadata":{"name":"coredns-76f75df574-776ph","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"dc1416cc-736d-4eab-b95d-e963572b78e3","resourceVersion":"2063","creationTimestamp":"2024-03-28T01:07:44Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"840f6c47-adf3-4c08-8b73-04e55f98f236","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:07:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"840f6c47-adf3-4c08-8b73-04e55f98f236\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 86569 chars]
	I0328 01:33:39.012910    6044 system_pods.go:59] 12 kube-system pods found
	I0328 01:33:39.012910    6044 system_pods.go:61] "coredns-76f75df574-776ph" [dc1416cc-736d-4eab-b95d-e963572b78e3] Running
	I0328 01:33:39.012910    6044 system_pods.go:61] "etcd-multinode-240000" [0a33e012-ebfe-4ac4-bf0b-ffccdd7308de] Running
	I0328 01:33:39.012910    6044 system_pods.go:61] "kindnet-hsnfl" [e049fea9-9620-4eb5-9eb0-056c68076331] Running
	I0328 01:33:39.012910    6044 system_pods.go:61] "kindnet-jvgx2" [507e3461-4bd4-46b9-9189-606b3506a742] Running
	I0328 01:33:39.012910    6044 system_pods.go:61] "kindnet-rwghf" [7c75e225-0e90-4916-bf27-a00a036e0955] Running
	I0328 01:33:39.012910    6044 system_pods.go:61] "kube-apiserver-multinode-240000" [8b9b4cf7-40b0-4a3e-96ca-28c934f9789a] Running
	I0328 01:33:39.012910    6044 system_pods.go:61] "kube-controller-manager-multinode-240000" [4a79ab06-2314-43bb-8e37-45b9aab24e4e] Running
	I0328 01:33:39.012910    6044 system_pods.go:61] "kube-proxy-47rqg" [22fd5683-834d-47ae-a5b4-1ed980514e1b] Running
	I0328 01:33:39.012910    6044 system_pods.go:61] "kube-proxy-55rch" [a96f841b-3e8f-42c1-be63-03914c0b90e8] Running
	I0328 01:33:39.012910    6044 system_pods.go:61] "kube-proxy-t88gz" [695603ac-ab24-4206-9728-342b2af018e4] Running
	I0328 01:33:39.012910    6044 system_pods.go:61] "kube-scheduler-multinode-240000" [7670489f-fb6c-4b5f-80e8-5fe8de8d7d19] Running
	I0328 01:33:39.012910    6044 system_pods.go:61] "storage-provisioner" [3f881c2f-5b9a-4dc5-bc8c-56784b9ff60f] Running
	I0328 01:33:39.012910    6044 system_pods.go:74] duration metric: took 3.9108292s to wait for pod list to return data ...
	I0328 01:33:39.012910    6044 default_sa.go:34] waiting for default service account to be created ...
	I0328 01:33:39.013456    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/namespaces/default/serviceaccounts
	I0328 01:33:39.013544    6044 round_trippers.go:469] Request Headers:
	I0328 01:33:39.013544    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:33:39.013625    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:33:39.016380    6044 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0328 01:33:39.017375    6044 round_trippers.go:577] Response Headers:
	I0328 01:33:39.017375    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:33:39.017375    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:33:39.017375    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:33:39.017375    6044 round_trippers.go:580]     Content-Length: 262
	I0328 01:33:39.017375    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:33:39 GMT
	I0328 01:33:39.017375    6044 round_trippers.go:580]     Audit-Id: 31e7c8c6-5a9d-471c-a868-4b3dc01b7a5f
	I0328 01:33:39.017464    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:33:39.017464    6044 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"2077"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"8bb5dc68-e1fd-49c8-89aa-9b79f7d72fc2","resourceVersion":"356","creationTimestamp":"2024-03-28T01:07:44Z"}}]}
	I0328 01:33:39.017763    6044 default_sa.go:45] found service account: "default"
	I0328 01:33:39.017763    6044 default_sa.go:55] duration metric: took 4.8529ms for default service account to be created ...
	I0328 01:33:39.017763    6044 system_pods.go:116] waiting for k8s-apps to be running ...
	I0328 01:33:39.017901    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/namespaces/kube-system/pods
	I0328 01:33:39.017901    6044 round_trippers.go:469] Request Headers:
	I0328 01:33:39.017968    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:33:39.017968    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:33:39.023920    6044 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 01:33:39.023920    6044 round_trippers.go:577] Response Headers:
	I0328 01:33:39.023920    6044 round_trippers.go:580]     Audit-Id: b473a2ab-262a-4079-9b41-2e393370c4d3
	I0328 01:33:39.023920    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:33:39.024731    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:33:39.024731    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:33:39.024731    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:33:39.024731    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:33:39 GMT
	I0328 01:33:39.026919    6044 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"2077"},"items":[{"metadata":{"name":"coredns-76f75df574-776ph","generateName":"coredns-76f75df574-","namespace":"kube-system","uid":"dc1416cc-736d-4eab-b95d-e963572b78e3","resourceVersion":"2063","creationTimestamp":"2024-03-28T01:07:44Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"76f75df574"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-76f75df574","uid":"840f6c47-adf3-4c08-8b73-04e55f98f236","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-03-28T01:07:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"840f6c47-adf3-4c08-8b73-04e55f98f236\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 86569 chars]
	I0328 01:33:39.032531    6044 system_pods.go:86] 12 kube-system pods found
	I0328 01:33:39.032531    6044 system_pods.go:89] "coredns-76f75df574-776ph" [dc1416cc-736d-4eab-b95d-e963572b78e3] Running
	I0328 01:33:39.032531    6044 system_pods.go:89] "etcd-multinode-240000" [0a33e012-ebfe-4ac4-bf0b-ffccdd7308de] Running
	I0328 01:33:39.032531    6044 system_pods.go:89] "kindnet-hsnfl" [e049fea9-9620-4eb5-9eb0-056c68076331] Running
	I0328 01:33:39.032531    6044 system_pods.go:89] "kindnet-jvgx2" [507e3461-4bd4-46b9-9189-606b3506a742] Running
	I0328 01:33:39.032531    6044 system_pods.go:89] "kindnet-rwghf" [7c75e225-0e90-4916-bf27-a00a036e0955] Running
	I0328 01:33:39.032531    6044 system_pods.go:89] "kube-apiserver-multinode-240000" [8b9b4cf7-40b0-4a3e-96ca-28c934f9789a] Running
	I0328 01:33:39.032531    6044 system_pods.go:89] "kube-controller-manager-multinode-240000" [4a79ab06-2314-43bb-8e37-45b9aab24e4e] Running
	I0328 01:33:39.032531    6044 system_pods.go:89] "kube-proxy-47rqg" [22fd5683-834d-47ae-a5b4-1ed980514e1b] Running
	I0328 01:33:39.032531    6044 system_pods.go:89] "kube-proxy-55rch" [a96f841b-3e8f-42c1-be63-03914c0b90e8] Running
	I0328 01:33:39.032531    6044 system_pods.go:89] "kube-proxy-t88gz" [695603ac-ab24-4206-9728-342b2af018e4] Running
	I0328 01:33:39.032531    6044 system_pods.go:89] "kube-scheduler-multinode-240000" [7670489f-fb6c-4b5f-80e8-5fe8de8d7d19] Running
	I0328 01:33:39.032531    6044 system_pods.go:89] "storage-provisioner" [3f881c2f-5b9a-4dc5-bc8c-56784b9ff60f] Running
	I0328 01:33:39.032531    6044 system_pods.go:126] duration metric: took 14.7682ms to wait for k8s-apps to be running ...
	I0328 01:33:39.032531    6044 system_svc.go:44] waiting for kubelet service to be running ....
	I0328 01:33:39.046305    6044 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0328 01:33:39.075188    6044 system_svc.go:56] duration metric: took 42.6564ms WaitForService to wait for kubelet
	I0328 01:33:39.075340    6044 kubeadm.go:576] duration metric: took 1m14.0713139s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0328 01:33:39.075340    6044 node_conditions.go:102] verifying NodePressure condition ...
	I0328 01:33:39.075505    6044 round_trippers.go:463] GET https://172.28.229.19:8443/api/v1/nodes
	I0328 01:33:39.075505    6044 round_trippers.go:469] Request Headers:
	I0328 01:33:39.075577    6044 round_trippers.go:473]     Accept: application/json, */*
	I0328 01:33:39.075577    6044 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0328 01:33:39.080842    6044 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0328 01:33:39.080842    6044 round_trippers.go:577] Response Headers:
	I0328 01:33:39.081011    6044 round_trippers.go:580]     Cache-Control: no-cache, private
	I0328 01:33:39.081011    6044 round_trippers.go:580]     Content-Type: application/json
	I0328 01:33:39.081011    6044 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 2e519721-36e7-4f59-8b0e-8e9119a7eec6
	I0328 01:33:39.081011    6044 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 1996e2ed-95b4-43db-a215-db716faef12b
	I0328 01:33:39.081011    6044 round_trippers.go:580]     Date: Thu, 28 Mar 2024 01:33:39 GMT
	I0328 01:33:39.081011    6044 round_trippers.go:580]     Audit-Id: 1ab1d709-f2ac-46f2-9f18-885f95182cd9
	I0328 01:33:39.081454    6044 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"2077"},"items":[{"metadata":{"name":"multinode-240000","uid":"c49881f6-08bc-4c90-a88c-f307bac3ef1c","resourceVersion":"2022","creationTimestamp":"2024-03-28T01:07:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-240000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a940f980f77c9bfc8ef93678db8c1f49c9dd79d","minikube.k8s.io/name":"multinode-240000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_03_28T01_07_32_0700","minikube.k8s.io/version":"v1.33.0-beta.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"ma
nagedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v [truncated 16280 chars]
	I0328 01:33:39.082071    6044 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0328 01:33:39.082651    6044 node_conditions.go:123] node cpu capacity is 2
	I0328 01:33:39.082706    6044 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0328 01:33:39.082706    6044 node_conditions.go:123] node cpu capacity is 2
	I0328 01:33:39.082706    6044 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0328 01:33:39.082706    6044 node_conditions.go:123] node cpu capacity is 2
	I0328 01:33:39.082706    6044 node_conditions.go:105] duration metric: took 7.3005ms to run NodePressure ...
	I0328 01:33:39.082706    6044 start.go:240] waiting for startup goroutines ...
	I0328 01:33:39.082706    6044 start.go:245] waiting for cluster config update ...
	I0328 01:33:39.082706    6044 start.go:254] writing updated cluster config ...
	I0328 01:33:39.088662    6044 out.go:177] 
	I0328 01:33:39.091921    6044 config.go:182] Loaded profile config "ha-170000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0328 01:33:39.100986    6044 config.go:182] Loaded profile config "multinode-240000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0328 01:33:39.101311    6044 profile.go:142] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-240000\config.json ...
	I0328 01:33:39.106290    6044 out.go:177] * Starting "multinode-240000-m02" worker node in "multinode-240000" cluster
	I0328 01:33:39.109822    6044 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0328 01:33:39.109822    6044 cache.go:56] Caching tarball of preloaded images
	I0328 01:33:39.109822    6044 preload.go:173] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0328 01:33:39.110488    6044 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0328 01:33:39.110488    6044 profile.go:142] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-240000\config.json ...
	I0328 01:33:39.112601    6044 start.go:360] acquireMachinesLock for multinode-240000-m02: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0328 01:33:39.112601    6044 start.go:364] duration metric: took 0s to acquireMachinesLock for "multinode-240000-m02"
	I0328 01:33:39.113502    6044 start.go:96] Skipping create...Using existing machine configuration
	I0328 01:33:39.113502    6044 fix.go:54] fixHost starting: m02
	I0328 01:33:39.113502    6044 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-240000-m02 ).state
	I0328 01:33:41.501367    6044 main.go:141] libmachine: [stdout =====>] : Off
	
	I0328 01:33:41.501764    6044 main.go:141] libmachine: [stderr =====>] : 
	I0328 01:33:41.501764    6044 fix.go:112] recreateIfNeeded on multinode-240000-m02: state=Stopped err=<nil>
	W0328 01:33:41.501764    6044 fix.go:138] unexpected machine state, will restart: <nil>
	I0328 01:33:41.505415    6044 out.go:177] * Restarting existing hyperv VM for "multinode-240000-m02" ...
	I0328 01:33:41.510146    6044 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-240000-m02
	I0328 01:33:44.760660    6044 main.go:141] libmachine: [stdout =====>] : 
	I0328 01:33:44.760660    6044 main.go:141] libmachine: [stderr =====>] : 
	I0328 01:33:44.760660    6044 main.go:141] libmachine: Waiting for host to start...
	I0328 01:33:44.760660    6044 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-240000-m02 ).state
	I0328 01:33:47.160144    6044 main.go:141] libmachine: [stdout =====>] : Running
	
	I0328 01:33:47.160144    6044 main.go:141] libmachine: [stderr =====>] : 
	I0328 01:33:47.160144    6044 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-240000-m02 ).networkadapters[0]).ipaddresses[0]
	I0328 01:33:49.931320    6044 main.go:141] libmachine: [stdout =====>] : 
	I0328 01:33:49.931416    6044 main.go:141] libmachine: [stderr =====>] : 
	I0328 01:33:50.945836    6044 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-240000-m02 ).state
	I0328 01:33:53.344974    6044 main.go:141] libmachine: [stdout =====>] : Running
	
	I0328 01:33:53.345732    6044 main.go:141] libmachine: [stderr =====>] : 
	I0328 01:33:53.345732    6044 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-240000-m02 ).networkadapters[0]).ipaddresses[0]
	I0328 01:33:56.068896    6044 main.go:141] libmachine: [stdout =====>] : 
	I0328 01:33:56.069154    6044 main.go:141] libmachine: [stderr =====>] : 
	I0328 01:33:57.075928    6044 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-240000-m02 ).state
	I0328 01:33:59.398589    6044 main.go:141] libmachine: [stdout =====>] : Running
	
	I0328 01:33:59.398737    6044 main.go:141] libmachine: [stderr =====>] : 
	I0328 01:33:59.398737    6044 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-240000-m02 ).networkadapters[0]).ipaddresses[0]
	I0328 01:34:02.061556    6044 main.go:141] libmachine: [stdout =====>] : 
	I0328 01:34:02.061748    6044 main.go:141] libmachine: [stderr =====>] : 
	I0328 01:34:03.067243    6044 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-240000-m02 ).state
	I0328 01:34:05.378987    6044 main.go:141] libmachine: [stdout =====>] : Running
	
	I0328 01:34:05.378987    6044 main.go:141] libmachine: [stderr =====>] : 
	I0328 01:34:05.378987    6044 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-240000-m02 ).networkadapters[0]).ipaddresses[0]
	I0328 01:34:08.059910    6044 main.go:141] libmachine: [stdout =====>] : 
	I0328 01:34:08.060961    6044 main.go:141] libmachine: [stderr =====>] : 
	I0328 01:34:09.076183    6044 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-240000-m02 ).state
	
	
	==> Docker <==
	Mar 28 01:33:28 multinode-240000 dockerd[1051]: 2024/03/28 01:33:28 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Mar 28 01:33:31 multinode-240000 dockerd[1051]: 2024/03/28 01:33:31 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Mar 28 01:33:31 multinode-240000 dockerd[1051]: 2024/03/28 01:33:31 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Mar 28 01:33:31 multinode-240000 dockerd[1051]: 2024/03/28 01:33:31 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Mar 28 01:33:31 multinode-240000 dockerd[1051]: 2024/03/28 01:33:31 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Mar 28 01:33:31 multinode-240000 dockerd[1051]: 2024/03/28 01:33:31 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Mar 28 01:33:31 multinode-240000 dockerd[1051]: 2024/03/28 01:33:31 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Mar 28 01:33:31 multinode-240000 dockerd[1051]: 2024/03/28 01:33:31 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Mar 28 01:33:31 multinode-240000 dockerd[1051]: 2024/03/28 01:33:31 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Mar 28 01:33:32 multinode-240000 dockerd[1051]: 2024/03/28 01:33:32 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Mar 28 01:33:32 multinode-240000 dockerd[1051]: 2024/03/28 01:33:32 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Mar 28 01:33:32 multinode-240000 dockerd[1051]: 2024/03/28 01:33:32 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Mar 28 01:33:32 multinode-240000 dockerd[1051]: 2024/03/28 01:33:32 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Mar 28 01:33:35 multinode-240000 dockerd[1051]: 2024/03/28 01:33:35 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Mar 28 01:33:35 multinode-240000 dockerd[1051]: 2024/03/28 01:33:35 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Mar 28 01:33:35 multinode-240000 dockerd[1051]: 2024/03/28 01:33:35 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Mar 28 01:33:35 multinode-240000 dockerd[1051]: 2024/03/28 01:33:35 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Mar 28 01:33:35 multinode-240000 dockerd[1051]: 2024/03/28 01:33:35 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Mar 28 01:33:35 multinode-240000 dockerd[1051]: 2024/03/28 01:33:35 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Mar 28 01:33:35 multinode-240000 dockerd[1051]: 2024/03/28 01:33:35 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Mar 28 01:33:36 multinode-240000 dockerd[1051]: 2024/03/28 01:33:36 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Mar 28 01:33:36 multinode-240000 dockerd[1051]: 2024/03/28 01:33:36 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Mar 28 01:33:36 multinode-240000 dockerd[1051]: 2024/03/28 01:33:36 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Mar 28 01:33:36 multinode-240000 dockerd[1051]: 2024/03/28 01:33:36 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Mar 28 01:33:36 multinode-240000 dockerd[1051]: 2024/03/28 01:33:36 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	dea6e77fe6072       8c811b4aec35f                                                                                         About a minute ago   Running             busybox                   1                   57a41fbc578d5       busybox-7fdf7869d9-ct428
	e6a5a75ec447f       cbb01a7bd410d                                                                                         About a minute ago   Running             coredns                   1                   d3a9caca46521       coredns-76f75df574-776ph
	64647587ffc1f       6e38f40d628db                                                                                         About a minute ago   Running             storage-provisioner       2                   821d3cf9ae1a9       storage-provisioner
	ee99098e42fc1       4950bb10b3f87                                                                                         2 minutes ago        Running             kindnet-cni               1                   347f7ad7ebaed       kindnet-rwghf
	4dcf03394ea80       6e38f40d628db                                                                                         2 minutes ago        Exited              storage-provisioner       1                   821d3cf9ae1a9       storage-provisioner
	7c9638784c60f       a1d263b5dc5b0                                                                                         2 minutes ago        Running             kube-proxy                1                   dfd01cb54b7d8       kube-proxy-47rqg
	6539c85e1b61f       39f995c9f1996                                                                                         2 minutes ago        Running             kube-apiserver            0                   4dd7c46520744       kube-apiserver-multinode-240000
	ab4a76ecb029b       3861cfcd7c04c                                                                                         2 minutes ago        Running             etcd                      0                   8780a18ab9755       etcd-multinode-240000
	bc83a37dbd03c       8c390d98f50c0                                                                                         2 minutes ago        Running             kube-scheduler            1                   8cf9dbbfda9ea       kube-scheduler-multinode-240000
	ceaccf323deed       6052a25da3f97                                                                                         2 minutes ago        Running             kube-controller-manager   1                   3314134e34d83       kube-controller-manager-multinode-240000
	a130300bc7839       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   22 minutes ago       Exited              busybox                   0                   930fbfde452c0       busybox-7fdf7869d9-ct428
	29e516c918ef4       cbb01a7bd410d                                                                                         26 minutes ago       Exited              coredns                   0                   6b6f67390b070       coredns-76f75df574-776ph
	dc9808261b21c       kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988              26 minutes ago       Exited              kindnet-cni               0                   6ae82cd0a8489       kindnet-rwghf
	bb0b3c5422645       a1d263b5dc5b0                                                                                         26 minutes ago       Exited              kube-proxy                0                   5d9ed3a20e885       kube-proxy-47rqg
	1aa05268773e4       6052a25da3f97                                                                                         27 minutes ago       Exited              kube-controller-manager   0                   763932cfdf0b0       kube-controller-manager-multinode-240000
	7061eab02790d       8c390d98f50c0                                                                                         27 minutes ago       Exited              kube-scheduler            0                   7415d077c6f81       kube-scheduler-multinode-240000
	
	
	==> coredns [29e516c918ef] <==
	[INFO] 10.244.1.2:41628 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.0000727s
	[INFO] 10.244.1.2:58750 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000090601s
	[INFO] 10.244.1.2:59003 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0000565s
	[INFO] 10.244.1.2:44988 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.0000534s
	[INFO] 10.244.1.2:46242 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0000553s
	[INFO] 10.244.1.2:54917 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0000638s
	[INFO] 10.244.1.2:39304 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000177201s
	[INFO] 10.244.0.3:48823 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0000796s
	[INFO] 10.244.0.3:44709 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000142901s
	[INFO] 10.244.0.3:48375 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0000774s
	[INFO] 10.244.0.3:58925 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000125101s
	[INFO] 10.244.1.2:59246 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001171s
	[INFO] 10.244.1.2:47730 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0000697s
	[INFO] 10.244.1.2:33031 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0000695s
	[INFO] 10.244.1.2:50853 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000057s
	[INFO] 10.244.0.3:39682 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000390002s
	[INFO] 10.244.0.3:52761 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000108301s
	[INFO] 10.244.0.3:46476 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000158601s
	[INFO] 10.244.0.3:57613 - 5 "PTR IN 1.224.28.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.0000931s
	[INFO] 10.244.1.2:43367 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000233301s
	[INFO] 10.244.1.2:50120 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.0002331s
	[INFO] 10.244.1.2:43779 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.0000821s
	[INFO] 10.244.1.2:37155 - 5 "PTR IN 1.224.28.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.0000589s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [e6a5a75ec447] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 61f4d0960164fdf8d8157aaa96d041acf5b29f3c98ba802d705114162ff9f2cc889bbb973f9b8023f3112734912ee6f4eadc4faa21115183d5697de30dae3805
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:56542 - 57483 "HINFO IN 863318367541877849.2825438388179145044. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.037994825s
	
	
	==> describe nodes <==
	Name:               multinode-240000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-240000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1a940f980f77c9bfc8ef93678db8c1f49c9dd79d
	                    minikube.k8s.io/name=multinode-240000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_28T01_07_32_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 28 Mar 2024 01:07:27 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-240000
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 28 Mar 2024 01:34:41 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 28 Mar 2024 01:32:53 +0000   Thu, 28 Mar 2024 01:07:24 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 28 Mar 2024 01:32:53 +0000   Thu, 28 Mar 2024 01:07:24 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 28 Mar 2024 01:32:53 +0000   Thu, 28 Mar 2024 01:07:24 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 28 Mar 2024 01:32:53 +0000   Thu, 28 Mar 2024 01:32:53 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.28.229.19
	  Hostname:    multinode-240000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 fe98ff783f164d50926235b1a1a0c9a9
	  System UUID:                074b49af-5c50-b749-b0a9-2a3d75bf97a0
	  Boot ID:                    88b5f16c-258a-4fb6-a998-e0ffa63edff9
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.0
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7fdf7869d9-ct428                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 coredns-76f75df574-776ph                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     26m
	  kube-system                 etcd-multinode-240000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         2m23s
	  kube-system                 kindnet-rwghf                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      26m
	  kube-system                 kube-apiserver-multinode-240000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m23s
	  kube-system                 kube-controller-manager-multinode-240000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 kube-proxy-47rqg                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         26m
	  kube-system                 kube-scheduler-multinode-240000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         26m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 26m                    kube-proxy       
	  Normal  Starting                 2m20s                  kube-proxy       
	  Normal  Starting                 27m                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  27m (x8 over 27m)      kubelet          Node multinode-240000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    27m (x8 over 27m)      kubelet          Node multinode-240000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     27m (x7 over 27m)      kubelet          Node multinode-240000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  27m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    27m                    kubelet          Node multinode-240000 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  27m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  27m                    kubelet          Node multinode-240000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     27m                    kubelet          Node multinode-240000 status is now: NodeHasSufficientPID
	  Normal  Starting                 27m                    kubelet          Starting kubelet.
	  Normal  RegisteredNode           26m                    node-controller  Node multinode-240000 event: Registered Node multinode-240000 in Controller
	  Normal  NodeReady                26m                    kubelet          Node multinode-240000 status is now: NodeReady
	  Normal  Starting                 2m29s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     2m29s (x7 over 2m29s)  kubelet          Node multinode-240000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m29s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  2m28s (x8 over 2m29s)  kubelet          Node multinode-240000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m28s (x8 over 2m29s)  kubelet          Node multinode-240000 status is now: NodeHasNoDiskPressure
	  Normal  RegisteredNode           2m10s                  node-controller  Node multinode-240000 event: Registered Node multinode-240000 in Controller
	
	
	Name:               multinode-240000-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-240000-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1a940f980f77c9bfc8ef93678db8c1f49c9dd79d
	                    minikube.k8s.io/name=multinode-240000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_03_28T01_10_55_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 28 Mar 2024 01:10:54 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-240000-m02
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 28 Mar 2024 01:28:58 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Thu, 28 Mar 2024 01:27:18 +0000   Thu, 28 Mar 2024 01:33:12 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Thu, 28 Mar 2024 01:27:18 +0000   Thu, 28 Mar 2024 01:33:12 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Thu, 28 Mar 2024 01:27:18 +0000   Thu, 28 Mar 2024 01:33:12 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Thu, 28 Mar 2024 01:27:18 +0000   Thu, 28 Mar 2024 01:33:12 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  172.28.230.250
	  Hostname:    multinode-240000-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 2bcbb6f523d04ea69ba7f23d0cdfec75
	  System UUID:                d499bd2a-38ff-6a40-b0a5-5534aeedd854
	  Boot ID:                    cfc1ec0e-7646-40c9-8245-9d09d77d6b1d
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.0
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7fdf7869d9-zgwm4    0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kindnet-hsnfl               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      23m
	  kube-system                 kube-proxy-t88gz            0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 23m                kube-proxy       
	  Normal  Starting                 23m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  23m (x2 over 23m)  kubelet          Node multinode-240000-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    23m (x2 over 23m)  kubelet          Node multinode-240000-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     23m (x2 over 23m)  kubelet          Node multinode-240000-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  23m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           23m                node-controller  Node multinode-240000-m02 event: Registered Node multinode-240000-m02 in Controller
	  Normal  NodeReady                23m                kubelet          Node multinode-240000-m02 status is now: NodeReady
	  Normal  RegisteredNode           2m10s              node-controller  Node multinode-240000-m02 event: Registered Node multinode-240000-m02 in Controller
	  Normal  NodeNotReady             90s                node-controller  Node multinode-240000-m02 status is now: NodeNotReady
	
	
	Name:               multinode-240000-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-240000-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1a940f980f77c9bfc8ef93678db8c1f49c9dd79d
	                    minikube.k8s.io/name=multinode-240000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_03_28T01_27_31_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 28 Mar 2024 01:27:30 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-240000-m03
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 28 Mar 2024 01:28:31 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Thu, 28 Mar 2024 01:27:36 +0000   Thu, 28 Mar 2024 01:29:14 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Thu, 28 Mar 2024 01:27:36 +0000   Thu, 28 Mar 2024 01:29:14 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Thu, 28 Mar 2024 01:27:36 +0000   Thu, 28 Mar 2024 01:29:14 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Thu, 28 Mar 2024 01:27:36 +0000   Thu, 28 Mar 2024 01:29:14 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  172.28.224.172
	  Hostname:    multinode-240000-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 53e5a22090614654950f5f4d91307651
	  System UUID:                1b1cc332-0092-fa4b-8d09-1c599aae83ad
	  Boot ID:                    7cabd891-d8ad-4af2-8060-94ae87174528
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.0
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-jvgx2       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      18m
	  kube-system                 kube-proxy-55rch    0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 18m                    kube-proxy       
	  Normal  Starting                 7m9s                   kube-proxy       
	  Normal  NodeHasSufficientMemory  18m (x2 over 18m)      kubelet          Node multinode-240000-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  18m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 18m                    kubelet          Starting kubelet.
	  Normal  NodeHasNoDiskPressure    18m (x2 over 18m)      kubelet          Node multinode-240000-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     18m (x2 over 18m)      kubelet          Node multinode-240000-m03 status is now: NodeHasSufficientPID
	  Normal  NodeReady                18m                    kubelet          Node multinode-240000-m03 status is now: NodeReady
	  Normal  Starting                 7m12s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  7m12s (x2 over 7m12s)  kubelet          Node multinode-240000-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m12s (x2 over 7m12s)  kubelet          Node multinode-240000-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m12s (x2 over 7m12s)  kubelet          Node multinode-240000-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  7m12s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           7m8s                   node-controller  Node multinode-240000-m03 event: Registered Node multinode-240000-m03 in Controller
	  Normal  NodeReady                7m6s                   kubelet          Node multinode-240000-m03 status is now: NodeReady
	  Normal  NodeNotReady             5m28s                  node-controller  Node multinode-240000-m03 status is now: NodeNotReady
	  Normal  RegisteredNode           2m10s                  node-controller  Node multinode-240000-m03 event: Registered Node multinode-240000-m03 in Controller
	
	
	==> dmesg <==
	              * this clock source is slow. Consider trying other clock sources
	[  +5.946328] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.758535] psmouse serio1: trackpoint: failed to get extended button data, assuming 3 buttons
	[  +1.937420] systemd-fstab-generator[115]: Ignoring "noauto" option for root device
	[  +7.347197] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Mar28 01:31] systemd-fstab-generator[640]: Ignoring "noauto" option for root device
	[  +0.201840] systemd-fstab-generator[653]: Ignoring "noauto" option for root device
	[Mar28 01:32] systemd-fstab-generator[979]: Ignoring "noauto" option for root device
	[  +0.108343] kauditd_printk_skb: 73 callbacks suppressed
	[  +0.586323] systemd-fstab-generator[1017]: Ignoring "noauto" option for root device
	[  +0.218407] systemd-fstab-generator[1029]: Ignoring "noauto" option for root device
	[  +0.238441] systemd-fstab-generator[1043]: Ignoring "noauto" option for root device
	[  +3.002162] systemd-fstab-generator[1230]: Ignoring "noauto" option for root device
	[  +0.206082] systemd-fstab-generator[1242]: Ignoring "noauto" option for root device
	[  +0.206423] systemd-fstab-generator[1254]: Ignoring "noauto" option for root device
	[  +0.316656] systemd-fstab-generator[1269]: Ignoring "noauto" option for root device
	[  +0.941398] systemd-fstab-generator[1391]: Ignoring "noauto" option for root device
	[  +0.123620] kauditd_printk_skb: 205 callbacks suppressed
	[  +3.687763] systemd-fstab-generator[1526]: Ignoring "noauto" option for root device
	[  +1.367953] kauditd_printk_skb: 44 callbacks suppressed
	[  +6.014600] kauditd_printk_skb: 30 callbacks suppressed
	[  +4.465273] systemd-fstab-generator[3066]: Ignoring "noauto" option for root device
	[  +7.649293] kauditd_printk_skb: 70 callbacks suppressed
	
	
	==> etcd [ab4a76ecb029] <==
	{"level":"info","ts":"2024-03-28T01:32:15.894489Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-03-28T01:32:15.894506Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-03-28T01:32:15.895008Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8337aaa1903c5250 switched to configuration voters=(9455213553573974608)"}
	{"level":"info","ts":"2024-03-28T01:32:15.895115Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"9d63dbc5e8f5386f","local-member-id":"8337aaa1903c5250","added-peer-id":"8337aaa1903c5250","added-peer-peer-urls":["https://172.28.227.122:2380"]}
	{"level":"info","ts":"2024-03-28T01:32:15.895259Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"9d63dbc5e8f5386f","local-member-id":"8337aaa1903c5250","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-28T01:32:15.895348Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-28T01:32:15.908515Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-03-28T01:32:15.908865Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"8337aaa1903c5250","initial-advertise-peer-urls":["https://172.28.229.19:2380"],"listen-peer-urls":["https://172.28.229.19:2380"],"advertise-client-urls":["https://172.28.229.19:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.28.229.19:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-03-28T01:32:15.908914Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-03-28T01:32:15.908997Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"172.28.229.19:2380"}
	{"level":"info","ts":"2024-03-28T01:32:15.909011Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"172.28.229.19:2380"}
	{"level":"info","ts":"2024-03-28T01:32:17.232003Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8337aaa1903c5250 is starting a new election at term 2"}
	{"level":"info","ts":"2024-03-28T01:32:17.232075Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8337aaa1903c5250 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-03-28T01:32:17.232112Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8337aaa1903c5250 received MsgPreVoteResp from 8337aaa1903c5250 at term 2"}
	{"level":"info","ts":"2024-03-28T01:32:17.232126Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8337aaa1903c5250 became candidate at term 3"}
	{"level":"info","ts":"2024-03-28T01:32:17.232135Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8337aaa1903c5250 received MsgVoteResp from 8337aaa1903c5250 at term 3"}
	{"level":"info","ts":"2024-03-28T01:32:17.232146Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8337aaa1903c5250 became leader at term 3"}
	{"level":"info","ts":"2024-03-28T01:32:17.232158Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 8337aaa1903c5250 elected leader 8337aaa1903c5250 at term 3"}
	{"level":"info","ts":"2024-03-28T01:32:17.237341Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"8337aaa1903c5250","local-member-attributes":"{Name:multinode-240000 ClientURLs:[https://172.28.229.19:2379]}","request-path":"/0/members/8337aaa1903c5250/attributes","cluster-id":"9d63dbc5e8f5386f","publish-timeout":"7s"}
	{"level":"info","ts":"2024-03-28T01:32:17.237562Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-28T01:32:17.239961Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-28T01:32:17.263569Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-03-28T01:32:17.263595Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-03-28T01:32:17.283007Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.28.229.19:2379"}
	{"level":"info","ts":"2024-03-28T01:32:17.301354Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 01:34:42 up 4 min,  0 users,  load average: 0.23, 0.25, 0.10
	Linux multinode-240000 5.10.207 #1 SMP Wed Mar 27 22:02:20 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [dc9808261b21] <==
	I0328 01:28:54.680547       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.3.0/24] 
	I0328 01:29:04.687598       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:29:04.687765       1 main.go:227] handling current node
	I0328 01:29:04.687785       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:29:04.687796       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:29:04.687963       1 main.go:223] Handling node with IPs: map[172.28.224.172:{}]
	I0328 01:29:04.687979       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.3.0/24] 
	I0328 01:29:14.698762       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:29:14.698810       1 main.go:227] handling current node
	I0328 01:29:14.698825       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:29:14.698832       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:29:14.699169       1 main.go:223] Handling node with IPs: map[172.28.224.172:{}]
	I0328 01:29:14.699203       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.3.0/24] 
	I0328 01:29:24.717977       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:29:24.718118       1 main.go:227] handling current node
	I0328 01:29:24.718136       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:29:24.718145       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:29:24.718279       1 main.go:223] Handling node with IPs: map[172.28.224.172:{}]
	I0328 01:29:24.718311       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.3.0/24] 
	I0328 01:29:34.724517       1 main.go:223] Handling node with IPs: map[172.28.227.122:{}]
	I0328 01:29:34.724618       1 main.go:227] handling current node
	I0328 01:29:34.724634       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:29:34.724643       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:29:34.725226       1 main.go:223] Handling node with IPs: map[172.28.224.172:{}]
	I0328 01:29:34.725416       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.3.0/24] 
	
	
	==> kindnet [ee99098e42fc] <==
	I0328 01:34:02.877603       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.3.0/24] 
	I0328 01:34:12.989367       1 main.go:223] Handling node with IPs: map[172.28.229.19:{}]
	I0328 01:34:12.989432       1 main.go:227] handling current node
	I0328 01:34:12.989449       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:34:12.989456       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:34:12.989962       1 main.go:223] Handling node with IPs: map[172.28.224.172:{}]
	I0328 01:34:12.990041       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.3.0/24] 
	I0328 01:34:23.001654       1 main.go:223] Handling node with IPs: map[172.28.229.19:{}]
	I0328 01:34:23.001700       1 main.go:227] handling current node
	I0328 01:34:23.002242       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:34:23.002255       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:34:23.002617       1 main.go:223] Handling node with IPs: map[172.28.224.172:{}]
	I0328 01:34:23.002650       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.3.0/24] 
	I0328 01:34:33.016121       1 main.go:223] Handling node with IPs: map[172.28.229.19:{}]
	I0328 01:34:33.016301       1 main.go:227] handling current node
	I0328 01:34:33.016337       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:34:33.016362       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:34:33.016513       1 main.go:223] Handling node with IPs: map[172.28.224.172:{}]
	I0328 01:34:33.016543       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.3.0/24] 
	I0328 01:34:43.031929       1 main.go:223] Handling node with IPs: map[172.28.229.19:{}]
	I0328 01:34:43.032840       1 main.go:227] handling current node
	I0328 01:34:43.033065       1 main.go:223] Handling node with IPs: map[172.28.230.250:{}]
	I0328 01:34:43.033288       1 main.go:250] Node multinode-240000-m02 has CIDR [10.244.1.0/24] 
	I0328 01:34:43.033578       1 main.go:223] Handling node with IPs: map[172.28.224.172:{}]
	I0328 01:34:43.033845       1 main.go:250] Node multinode-240000-m03 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [6539c85e1b61] <==
	I0328 01:32:19.334928       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0328 01:32:19.335653       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0328 01:32:19.499336       1 shared_informer.go:318] Caches are synced for configmaps
	I0328 01:32:19.501912       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0328 01:32:19.504433       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0328 01:32:19.506496       1 aggregator.go:165] initial CRD sync complete...
	I0328 01:32:19.506538       1 autoregister_controller.go:141] Starting autoregister controller
	I0328 01:32:19.506548       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0328 01:32:19.506871       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0328 01:32:19.506977       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0328 01:32:19.519086       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0328 01:32:19.542058       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0328 01:32:19.580921       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0328 01:32:19.592848       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0328 01:32:19.608262       1 cache.go:39] Caches are synced for autoregister controller
	I0328 01:32:20.302603       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0328 01:32:20.857698       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.28.227.122 172.28.229.19]
	I0328 01:32:20.859624       1 controller.go:624] quota admission added evaluator for: endpoints
	I0328 01:32:20.870212       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0328 01:32:22.795650       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0328 01:32:23.151124       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0328 01:32:23.177645       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0328 01:32:23.338313       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0328 01:32:23.353620       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W0328 01:32:40.864669       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.28.229.19]
	
	
	==> kube-controller-manager [1aa05268773e] <==
	I0328 01:11:49.352601       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="13.382248ms"
	I0328 01:11:49.353052       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="62.5µs"
	I0328 01:15:58.358805       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-240000-m03\" does not exist"
	I0328 01:15:58.359348       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-240000-m02"
	I0328 01:15:58.402286       1 event.go:376] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-jvgx2"
	I0328 01:15:58.402827       1 event.go:376] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-55rch"
	I0328 01:15:58.405421       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-240000-m03" podCIDRs=["10.244.2.0/24"]
	I0328 01:15:59.058703       1 node_lifecycle_controller.go:874] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-240000-m03"
	I0328 01:15:59.059180       1 event.go:376] "Event occurred" object="multinode-240000-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-240000-m03 event: Registered Node multinode-240000-m03 in Controller"
	I0328 01:16:20.751668       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-240000-m02"
	I0328 01:24:29.197407       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-240000-m02"
	I0328 01:24:29.203202       1 event.go:376] "Event occurred" object="multinode-240000-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node multinode-240000-m03 status is now: NodeNotReady"
	I0328 01:24:29.229608       1 event.go:376] "Event occurred" object="kube-system/kube-proxy-55rch" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0328 01:24:29.247522       1 event.go:376] "Event occurred" object="kube-system/kindnet-jvgx2" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0328 01:27:23.686830       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-240000-m02"
	I0328 01:27:24.286010       1 event.go:376] "Event occurred" object="multinode-240000-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RemovingNode" message="Node multinode-240000-m03 event: Removing Node multinode-240000-m03 from Controller"
	I0328 01:27:30.358404       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-240000-m02"
	I0328 01:27:30.361770       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-240000-m03\" does not exist"
	I0328 01:27:30.394360       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-240000-m03" podCIDRs=["10.244.3.0/24"]
	I0328 01:27:34.288477       1 event.go:376] "Event occurred" object="multinode-240000-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-240000-m03 event: Registered Node multinode-240000-m03 in Controller"
	I0328 01:27:36.134336       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-240000-m03"
	I0328 01:29:14.344304       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-240000-m02"
	I0328 01:29:14.346290       1 event.go:376] "Event occurred" object="multinode-240000-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node multinode-240000-m03 status is now: NodeNotReady"
	I0328 01:29:14.370766       1 event.go:376] "Event occurred" object="kube-system/kube-proxy-55rch" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0328 01:29:14.392308       1 event.go:376] "Event occurred" object="kube-system/kindnet-jvgx2" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	
	
	==> kube-controller-manager [ceaccf323dee] <==
	I0328 01:32:32.651676       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-legacy-unknown
	I0328 01:32:32.659290       1 shared_informer.go:318] Caches are synced for certificate-csrapproving
	I0328 01:32:32.667521       1 shared_informer.go:318] Caches are synced for resource quota
	I0328 01:32:32.683826       1 shared_informer.go:318] Caches are synced for endpoint_slice_mirroring
	I0328 01:32:32.683944       1 shared_informer.go:318] Caches are synced for endpoint
	I0328 01:32:32.737259       1 shared_informer.go:318] Caches are synced for resource quota
	I0328 01:32:32.742870       1 shared_informer.go:318] Caches are synced for endpoint_slice
	I0328 01:32:33.088175       1 shared_informer.go:318] Caches are synced for garbage collector
	I0328 01:32:33.088209       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I0328 01:32:33.097231       1 shared_informer.go:318] Caches are synced for garbage collector
	I0328 01:32:53.970448       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-240000-m02"
	I0328 01:32:57.647643       1 event.go:376] "Event occurred" object="kube-system/storage-provisioner" fieldPath="" kind="Pod" apiVersion="v1" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod kube-system/storage-provisioner"
	I0328 01:32:57.647943       1 event.go:376] "Event occurred" object="default/busybox-7fdf7869d9-ct428" fieldPath="" kind="Pod" apiVersion="v1" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-7fdf7869d9-ct428"
	I0328 01:32:57.648069       1 event.go:376] "Event occurred" object="kube-system/coredns-76f75df574-776ph" fieldPath="" kind="Pod" apiVersion="v1" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod kube-system/coredns-76f75df574-776ph"
	I0328 01:33:12.667954       1 event.go:376] "Event occurred" object="multinode-240000-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node multinode-240000-m02 status is now: NodeNotReady"
	I0328 01:33:12.686681       1 event.go:376] "Event occurred" object="default/busybox-7fdf7869d9-zgwm4" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0328 01:33:12.698519       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="13.246789ms"
	I0328 01:33:12.699114       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="37.9µs"
	I0328 01:33:12.709080       1 event.go:376] "Event occurred" object="kube-system/kindnet-hsnfl" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0328 01:33:12.733251       1 event.go:376] "Event occurred" object="kube-system/kube-proxy-t88gz" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0328 01:33:25.571898       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="20.940169ms"
	I0328 01:33:25.572013       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="31.4µs"
	I0328 01:33:25.596419       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="70.5µs"
	I0328 01:33:25.652921       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="18.37866ms"
	I0328 01:33:25.653855       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="42.9µs"
	
	
	==> kube-proxy [7c9638784c60] <==
	I0328 01:32:22.346613       1 server_others.go:72] "Using iptables proxy"
	I0328 01:32:22.432600       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["172.28.229.19"]
	I0328 01:32:22.670309       1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
	I0328 01:32:22.670342       1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0328 01:32:22.670422       1 server_others.go:168] "Using iptables Proxier"
	I0328 01:32:22.691003       1 proxier.go:245] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0328 01:32:22.691955       1 server.go:865] "Version info" version="v1.29.3"
	I0328 01:32:22.691998       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0328 01:32:22.703546       1 config.go:188] "Starting service config controller"
	I0328 01:32:22.706995       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0328 01:32:22.707357       1 config.go:97] "Starting endpoint slice config controller"
	I0328 01:32:22.707370       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0328 01:32:22.708174       1 config.go:315] "Starting node config controller"
	I0328 01:32:22.708184       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0328 01:32:22.807593       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0328 01:32:22.807699       1 shared_informer.go:318] Caches are synced for service config
	I0328 01:32:22.808439       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-proxy [bb0b3c542264] <==
	I0328 01:07:46.260052       1 server_others.go:72] "Using iptables proxy"
	I0328 01:07:46.279785       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["172.28.227.122"]
	I0328 01:07:46.364307       1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
	I0328 01:07:46.364414       1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0328 01:07:46.364433       1 server_others.go:168] "Using iptables Proxier"
	I0328 01:07:46.368524       1 proxier.go:245] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0328 01:07:46.368854       1 server.go:865] "Version info" version="v1.29.3"
	I0328 01:07:46.368909       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0328 01:07:46.370904       1 config.go:188] "Starting service config controller"
	I0328 01:07:46.382389       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0328 01:07:46.382488       1 shared_informer.go:318] Caches are synced for service config
	I0328 01:07:46.371910       1 config.go:97] "Starting endpoint slice config controller"
	I0328 01:07:46.382665       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0328 01:07:46.382693       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0328 01:07:46.374155       1 config.go:315] "Starting node config controller"
	I0328 01:07:46.382861       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0328 01:07:46.382887       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [7061eab02790] <==
	W0328 01:07:28.240756       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0328 01:07:28.240834       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0328 01:07:28.255074       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0328 01:07:28.255356       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0328 01:07:28.278207       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0328 01:07:28.278668       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0328 01:07:28.381584       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0328 01:07:28.381627       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0328 01:07:28.514618       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0328 01:07:28.515155       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0328 01:07:28.528993       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0328 01:07:28.529395       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0328 01:07:28.532653       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0328 01:07:28.532704       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0328 01:07:28.584380       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0328 01:07:28.585331       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0328 01:07:28.617611       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0328 01:07:28.618424       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0328 01:07:28.646703       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0328 01:07:28.647128       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0328 01:07:30.316754       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0328 01:29:38.212199       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0328 01:29:38.213339       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0328 01:29:38.213731       1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0328 01:29:38.223877       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [bc83a37dbd03] <==
	I0328 01:32:16.704993       1 serving.go:380] Generated self-signed cert in-memory
	W0328 01:32:19.361735       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0328 01:32:19.361772       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0328 01:32:19.361786       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0328 01:32:19.361794       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0328 01:32:19.443650       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.29.3"
	I0328 01:32:19.443696       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0328 01:32:19.451824       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0328 01:32:19.452157       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0328 01:32:19.452206       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0328 01:32:19.452231       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0328 01:32:19.556393       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Mar 28 01:32:51 multinode-240000 kubelet[1533]: E0328 01:32:51.399557    1533 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/dc1416cc-736d-4eab-b95d-e963572b78e3-config-volume podName:dc1416cc-736d-4eab-b95d-e963572b78e3 nodeName:}" failed. No retries permitted until 2024-03-28 01:33:23.399534782 +0000 UTC m=+69.958544472 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/dc1416cc-736d-4eab-b95d-e963572b78e3-config-volume") pod "coredns-76f75df574-776ph" (UID: "dc1416cc-736d-4eab-b95d-e963572b78e3") : object "kube-system"/"coredns" not registered
	Mar 28 01:32:51 multinode-240000 kubelet[1533]: E0328 01:32:51.499389    1533 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	Mar 28 01:32:51 multinode-240000 kubelet[1533]: E0328 01:32:51.499479    1533 projected.go:200] Error preparing data for projected volume kube-api-access-86msg for pod default/busybox-7fdf7869d9-ct428: object "default"/"kube-root-ca.crt" not registered
	Mar 28 01:32:51 multinode-240000 kubelet[1533]: E0328 01:32:51.499555    1533 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/82be2bd2-ca76-4804-8e23-ebd40a434863-kube-api-access-86msg podName:82be2bd2-ca76-4804-8e23-ebd40a434863 nodeName:}" failed. No retries permitted until 2024-03-28 01:33:23.499533548 +0000 UTC m=+70.058543238 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-86msg" (UniqueName: "kubernetes.io/projected/82be2bd2-ca76-4804-8e23-ebd40a434863-kube-api-access-86msg") pod "busybox-7fdf7869d9-ct428" (UID: "82be2bd2-ca76-4804-8e23-ebd40a434863") : object "default"/"kube-root-ca.crt" not registered
	Mar 28 01:32:51 multinode-240000 kubelet[1533]: E0328 01:32:51.789982    1533 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7fdf7869d9-ct428" podUID="82be2bd2-ca76-4804-8e23-ebd40a434863"
	Mar 28 01:32:51 multinode-240000 kubelet[1533]: E0328 01:32:51.790491    1533 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-76f75df574-776ph" podUID="dc1416cc-736d-4eab-b95d-e963572b78e3"
	Mar 28 01:32:52 multinode-240000 kubelet[1533]: I0328 01:32:52.819055    1533 scope.go:117] "RemoveContainer" containerID="d02996b2d57bf7439b634e180f3f28e83a0825e92695a9ca17ecca77cbb5da1c"
	Mar 28 01:32:52 multinode-240000 kubelet[1533]: I0328 01:32:52.819508    1533 scope.go:117] "RemoveContainer" containerID="4dcf03394ea80c2d0427ea263be7c533ab3aaf86ba89e254db95cbdd1b70a343"
	Mar 28 01:32:52 multinode-240000 kubelet[1533]: E0328 01:32:52.820004    1533 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(3f881c2f-5b9a-4dc5-bc8c-56784b9ff60f)\"" pod="kube-system/storage-provisioner" podUID="3f881c2f-5b9a-4dc5-bc8c-56784b9ff60f"
	Mar 28 01:32:53 multinode-240000 kubelet[1533]: E0328 01:32:53.789452    1533 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7fdf7869d9-ct428" podUID="82be2bd2-ca76-4804-8e23-ebd40a434863"
	Mar 28 01:32:53 multinode-240000 kubelet[1533]: E0328 01:32:53.791042    1533 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-76f75df574-776ph" podUID="dc1416cc-736d-4eab-b95d-e963572b78e3"
	Mar 28 01:32:53 multinode-240000 kubelet[1533]: I0328 01:32:53.945064    1533 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
	Mar 28 01:33:04 multinode-240000 kubelet[1533]: I0328 01:33:04.789137    1533 scope.go:117] "RemoveContainer" containerID="4dcf03394ea80c2d0427ea263be7c533ab3aaf86ba89e254db95cbdd1b70a343"
	Mar 28 01:33:13 multinode-240000 kubelet[1533]: I0328 01:33:13.803616    1533 scope.go:117] "RemoveContainer" containerID="66f15076d3443d3fc3179676ba45f1cbac7cf2eb673e7741a3dddae0eb5baac8"
	Mar 28 01:33:13 multinode-240000 kubelet[1533]: E0328 01:33:13.838374    1533 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 28 01:33:13 multinode-240000 kubelet[1533]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 28 01:33:13 multinode-240000 kubelet[1533]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 28 01:33:13 multinode-240000 kubelet[1533]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 28 01:33:13 multinode-240000 kubelet[1533]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 28 01:33:13 multinode-240000 kubelet[1533]: I0328 01:33:13.850324    1533 scope.go:117] "RemoveContainer" containerID="a01212226d03a29a5f7e096880ecf627817c14801c81f452beaa1a398b97cfe3"
	Mar 28 01:34:13 multinode-240000 kubelet[1533]: E0328 01:34:13.835462    1533 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 28 01:34:13 multinode-240000 kubelet[1533]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 28 01:34:13 multinode-240000 kubelet[1533]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 28 01:34:13 multinode-240000 kubelet[1533]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 28 01:34:13 multinode-240000 kubelet[1533]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

-- /stdout --
** stderr ** 
	W0328 01:34:31.794519    6616 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-240000 -n multinode-240000
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-240000 -n multinode-240000: (13.0072041s)
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-240000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (407.36s)

TestNoKubernetes/serial/StartWithK8s (312.74s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-905300 --driver=hyperv
no_kubernetes_test.go:95: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p NoKubernetes-905300 --driver=hyperv: exit status 1 (4m59.5981962s)

-- stdout --
	* [NoKubernetes-905300] minikube v1.33.0-beta.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4170 Build 19045.4170
	  - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=18485
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the hyperv driver based on user configuration
	* Starting "NoKubernetes-905300" primary control-plane node in "NoKubernetes-905300" cluster
	* Creating hyperv VM (CPUs=2, Memory=6000MB, Disk=20000MB) ...

-- /stdout --
** stderr ** 
	W0328 01:51:51.015053    9540 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
no_kubernetes_test.go:97: failed to start minikube with args: "out/minikube-windows-amd64.exe start -p NoKubernetes-905300 --driver=hyperv" : exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p NoKubernetes-905300 -n NoKubernetes-905300
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p NoKubernetes-905300 -n NoKubernetes-905300: exit status 6 (13.1339887s)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	W0328 01:56:50.594846    5512 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	E0328 01:57:03.535714    5512 status.go:417] kubeconfig endpoint: get endpoint: "NoKubernetes-905300" does not appear in C:\Users\jenkins.minikube6\minikube-integration\kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "NoKubernetes-905300" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestNoKubernetes/serial/StartWithK8s (312.74s)

TestPause/serial/Start (474.03s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-windows-amd64.exe start -p pause-083200 --memory=2048 --install-addons=false --wait=all --driver=hyperv
E0328 02:16:32.328265   10460 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-848700\client.crt: The system cannot find the path specified.
pause_test.go:80: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p pause-083200 --memory=2048 --install-addons=false --wait=all --driver=hyperv: exit status 90 (7m40.9817041s)

-- stdout --
	* [pause-083200] minikube v1.33.0-beta.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4170 Build 19045.4170
	  - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=18485
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the hyperv driver based on user configuration
	* Starting "pause-083200" primary control-plane node in "pause-083200" cluster
	* Creating hyperv VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	
	

-- /stdout --
** stderr ** 
	W0328 02:15:02.641765   13720 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Mar 28 02:21:07 pause-083200 systemd[1]: Starting Docker Application Container Engine...
	Mar 28 02:21:07 pause-083200 dockerd[664]: time="2024-03-28T02:21:07.322662654Z" level=info msg="Starting up"
	Mar 28 02:21:07 pause-083200 dockerd[664]: time="2024-03-28T02:21:07.323891088Z" level=info msg="containerd not running, starting managed containerd"
	Mar 28 02:21:07 pause-083200 dockerd[664]: time="2024-03-28T02:21:07.324902417Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=670
	Mar 28 02:21:07 pause-083200 dockerd[670]: time="2024-03-28T02:21:07.367776135Z" level=info msg="starting containerd" revision=dcf2847247e18caba8dce86522029642f60fe96b version=v1.7.14
	Mar 28 02:21:07 pause-083200 dockerd[670]: time="2024-03-28T02:21:07.399536037Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Mar 28 02:21:07 pause-083200 dockerd[670]: time="2024-03-28T02:21:07.399593538Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Mar 28 02:21:07 pause-083200 dockerd[670]: time="2024-03-28T02:21:07.399667041Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Mar 28 02:21:07 pause-083200 dockerd[670]: time="2024-03-28T02:21:07.399702042Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Mar 28 02:21:07 pause-083200 dockerd[670]: time="2024-03-28T02:21:07.399862246Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Mar 28 02:21:07 pause-083200 dockerd[670]: time="2024-03-28T02:21:07.399979749Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Mar 28 02:21:07 pause-083200 dockerd[670]: time="2024-03-28T02:21:07.400456463Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Mar 28 02:21:07 pause-083200 dockerd[670]: time="2024-03-28T02:21:07.400591467Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Mar 28 02:21:07 pause-083200 dockerd[670]: time="2024-03-28T02:21:07.400661869Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Mar 28 02:21:07 pause-083200 dockerd[670]: time="2024-03-28T02:21:07.400691470Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Mar 28 02:21:07 pause-083200 dockerd[670]: time="2024-03-28T02:21:07.400810573Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Mar 28 02:21:07 pause-083200 dockerd[670]: time="2024-03-28T02:21:07.401381289Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Mar 28 02:21:07 pause-083200 dockerd[670]: time="2024-03-28T02:21:07.404510878Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Mar 28 02:21:07 pause-083200 dockerd[670]: time="2024-03-28T02:21:07.404625081Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Mar 28 02:21:07 pause-083200 dockerd[670]: time="2024-03-28T02:21:07.404811887Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Mar 28 02:21:07 pause-083200 dockerd[670]: time="2024-03-28T02:21:07.404916690Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Mar 28 02:21:07 pause-083200 dockerd[670]: time="2024-03-28T02:21:07.405038593Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Mar 28 02:21:07 pause-083200 dockerd[670]: time="2024-03-28T02:21:07.405503806Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Mar 28 02:21:07 pause-083200 dockerd[670]: time="2024-03-28T02:21:07.405675311Z" level=info msg="metadata content store policy set" policy=shared
	Mar 28 02:21:07 pause-083200 dockerd[670]: time="2024-03-28T02:21:07.511564819Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Mar 28 02:21:07 pause-083200 dockerd[670]: time="2024-03-28T02:21:07.511664621Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Mar 28 02:21:07 pause-083200 dockerd[670]: time="2024-03-28T02:21:07.511700022Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Mar 28 02:21:07 pause-083200 dockerd[670]: time="2024-03-28T02:21:07.511725223Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Mar 28 02:21:07 pause-083200 dockerd[670]: time="2024-03-28T02:21:07.511744624Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Mar 28 02:21:07 pause-083200 dockerd[670]: time="2024-03-28T02:21:07.511981030Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Mar 28 02:21:07 pause-083200 dockerd[670]: time="2024-03-28T02:21:07.512616749Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Mar 28 02:21:07 pause-083200 dockerd[670]: time="2024-03-28T02:21:07.512835655Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Mar 28 02:21:07 pause-083200 dockerd[670]: time="2024-03-28T02:21:07.512977259Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Mar 28 02:21:07 pause-083200 dockerd[670]: time="2024-03-28T02:21:07.513018560Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Mar 28 02:21:07 pause-083200 dockerd[670]: time="2024-03-28T02:21:07.513036160Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Mar 28 02:21:07 pause-083200 dockerd[670]: time="2024-03-28T02:21:07.513073561Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Mar 28 02:21:07 pause-083200 dockerd[670]: time="2024-03-28T02:21:07.513094762Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Mar 28 02:21:07 pause-083200 dockerd[670]: time="2024-03-28T02:21:07.513185765Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Mar 28 02:21:07 pause-083200 dockerd[670]: time="2024-03-28T02:21:07.513208265Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Mar 28 02:21:07 pause-083200 dockerd[670]: time="2024-03-28T02:21:07.513225366Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Mar 28 02:21:07 pause-083200 dockerd[670]: time="2024-03-28T02:21:07.513260567Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Mar 28 02:21:07 pause-083200 dockerd[670]: time="2024-03-28T02:21:07.513330669Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Mar 28 02:21:07 pause-083200 dockerd[670]: time="2024-03-28T02:21:07.513374770Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Mar 28 02:21:07 pause-083200 dockerd[670]: time="2024-03-28T02:21:07.513424671Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Mar 28 02:21:07 pause-083200 dockerd[670]: time="2024-03-28T02:21:07.513458772Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Mar 28 02:21:07 pause-083200 dockerd[670]: time="2024-03-28T02:21:07.513475573Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Mar 28 02:21:07 pause-083200 dockerd[670]: time="2024-03-28T02:21:07.513505774Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Mar 28 02:21:07 pause-083200 dockerd[670]: time="2024-03-28T02:21:07.513521474Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Mar 28 02:21:07 pause-083200 dockerd[670]: time="2024-03-28T02:21:07.513534775Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Mar 28 02:21:07 pause-083200 dockerd[670]: time="2024-03-28T02:21:07.513548775Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Mar 28 02:21:07 pause-083200 dockerd[670]: time="2024-03-28T02:21:07.513563475Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Mar 28 02:21:07 pause-083200 dockerd[670]: time="2024-03-28T02:21:07.513579576Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Mar 28 02:21:07 pause-083200 dockerd[670]: time="2024-03-28T02:21:07.513594276Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Mar 28 02:21:07 pause-083200 dockerd[670]: time="2024-03-28T02:21:07.513619077Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Mar 28 02:21:07 pause-083200 dockerd[670]: time="2024-03-28T02:21:07.513632977Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Mar 28 02:21:07 pause-083200 dockerd[670]: time="2024-03-28T02:21:07.513651278Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Mar 28 02:21:07 pause-083200 dockerd[670]: time="2024-03-28T02:21:07.513673979Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Mar 28 02:21:07 pause-083200 dockerd[670]: time="2024-03-28T02:21:07.513689279Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Mar 28 02:21:07 pause-083200 dockerd[670]: time="2024-03-28T02:21:07.513702979Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Mar 28 02:21:07 pause-083200 dockerd[670]: time="2024-03-28T02:21:07.513786382Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Mar 28 02:21:07 pause-083200 dockerd[670]: time="2024-03-28T02:21:07.513893485Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Mar 28 02:21:07 pause-083200 dockerd[670]: time="2024-03-28T02:21:07.513913285Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Mar 28 02:21:07 pause-083200 dockerd[670]: time="2024-03-28T02:21:07.513927386Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Mar 28 02:21:07 pause-083200 dockerd[670]: time="2024-03-28T02:21:07.514052789Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Mar 28 02:21:07 pause-083200 dockerd[670]: time="2024-03-28T02:21:07.514097891Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Mar 28 02:21:07 pause-083200 dockerd[670]: time="2024-03-28T02:21:07.514118791Z" level=info msg="NRI interface is disabled by configuration."
	Mar 28 02:21:07 pause-083200 dockerd[670]: time="2024-03-28T02:21:07.514557204Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Mar 28 02:21:07 pause-083200 dockerd[670]: time="2024-03-28T02:21:07.514855512Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Mar 28 02:21:07 pause-083200 dockerd[670]: time="2024-03-28T02:21:07.514919314Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Mar 28 02:21:07 pause-083200 dockerd[670]: time="2024-03-28T02:21:07.514966315Z" level=info msg="containerd successfully booted in 0.149136s"
	Mar 28 02:21:08 pause-083200 dockerd[664]: time="2024-03-28T02:21:08.402615763Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Mar 28 02:21:08 pause-083200 dockerd[664]: time="2024-03-28T02:21:08.432547760Z" level=info msg="Loading containers: start."
	Mar 28 02:21:08 pause-083200 dockerd[664]: time="2024-03-28T02:21:08.747567827Z" level=info msg="Loading containers: done."
	Mar 28 02:21:08 pause-083200 dockerd[664]: time="2024-03-28T02:21:08.774591108Z" level=info msg="Docker daemon" commit=8b79278 containerd-snapshotter=false storage-driver=overlay2 version=26.0.0
	Mar 28 02:21:08 pause-083200 dockerd[664]: time="2024-03-28T02:21:08.774811914Z" level=info msg="Daemon has completed initialization"
	Mar 28 02:21:08 pause-083200 dockerd[664]: time="2024-03-28T02:21:08.893385704Z" level=info msg="API listen on /var/run/docker.sock"
	Mar 28 02:21:08 pause-083200 dockerd[664]: time="2024-03-28T02:21:08.893584609Z" level=info msg="API listen on [::]:2376"
	Mar 28 02:21:08 pause-083200 systemd[1]: Started Docker Application Container Engine.
	Mar 28 02:21:42 pause-083200 dockerd[664]: time="2024-03-28T02:21:42.296972077Z" level=info msg="Processing signal 'terminated'"
	Mar 28 02:21:42 pause-083200 dockerd[664]: time="2024-03-28T02:21:42.298765875Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Mar 28 02:21:42 pause-083200 systemd[1]: Stopping Docker Application Container Engine...
	Mar 28 02:21:42 pause-083200 dockerd[664]: time="2024-03-28T02:21:42.299846573Z" level=info msg="Daemon shutdown complete"
	Mar 28 02:21:42 pause-083200 dockerd[664]: time="2024-03-28T02:21:42.299889173Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Mar 28 02:21:42 pause-083200 dockerd[664]: time="2024-03-28T02:21:42.299903473Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Mar 28 02:21:43 pause-083200 systemd[1]: docker.service: Deactivated successfully.
	Mar 28 02:21:43 pause-083200 systemd[1]: Stopped Docker Application Container Engine.
	Mar 28 02:21:43 pause-083200 systemd[1]: Starting Docker Application Container Engine...
	Mar 28 02:21:43 pause-083200 dockerd[1020]: time="2024-03-28T02:21:43.385919745Z" level=info msg="Starting up"
	Mar 28 02:22:43 pause-083200 dockerd[1020]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Mar 28 02:22:43 pause-083200 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Mar 28 02:22:43 pause-083200 systemd[1]: docker.service: Failed with result 'exit-code'.
	Mar 28 02:22:43 pause-083200 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
pause_test.go:82: failed to start minikube with args: "out/minikube-windows-amd64.exe start -p pause-083200 --memory=2048 --install-addons=false --wait=all --driver=hyperv" : exit status 90
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p pause-083200 -n pause-083200
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p pause-083200 -n pause-083200: exit status 6 (13.0488702s)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	W0328 02:22:43.629330    9116 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	E0328 02:22:56.483988    9116 status.go:417] kubeconfig endpoint: get endpoint: "pause-083200" does not appear in C:\Users\jenkins.minikube6\minikube-integration\kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "pause-083200" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestPause/serial/Start (474.03s)
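Editor's note: the root cause in the journal above is dockerd timing out while dialing containerd's socket, which then surfaces as the RUNTIME_ENABLE exit. A minimal, hedged sketch of how one might grep a saved `minikube logs` dump for that exact signature when triaging recurrences (the sample file path is hypothetical; the log line is copied from this report):

```shell
# Write a one-line sample of the failure signature seen in this report to a
# scratch file standing in for a saved `minikube logs --file=logs.txt` dump.
cat > /tmp/docker-triage-sample.log <<'EOF'
Mar 28 02:22:43 pause-083200 dockerd[1020]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": context deadline exceeded
EOF

# Flag the dockerd -> containerd dial timeout if present in the dump.
if grep -q 'failed to dial "/run/containerd/containerd.sock"' /tmp/docker-triage-sample.log; then
  echo "containerd socket dial timeout detected"
fi
```

When this signature appears, the next step suggested by the log itself is to inspect containerd rather than docker (`systemctl status docker.service` / `journalctl -xeu docker.service` inside the VM, per the quoted error text).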

TestNetworkPlugins/group/calico/Start (10800.53s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p calico-608800 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=hyperv
panic: test timed out after 3h0m0s
running tests:
	TestKubernetesUpgrade (19m26s)
	TestNetworkPlugins (36m9s)
	TestNetworkPlugins/group/auto (7m8s)
	TestNetworkPlugins/group/auto/Start (7m8s)
	TestNetworkPlugins/group/calico (46s)
	TestNetworkPlugins/group/calico/Start (46s)
	TestNetworkPlugins/group/kindnet (4m0s)
	TestNetworkPlugins/group/kindnet/Start (4m0s)
	TestStartStop (12m57s)

goroutine 1832 [running]:
testing.(*M).startAlarm.func1()
	/usr/local/go/src/testing/testing.go:2366 +0x385
created by time.goFunc
	/usr/local/go/src/time/sleep.go:177 +0x2d

goroutine 1 [chan receive, 7 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1650 +0x4ab
testing.tRunner(0xc00079c680, 0xc00090fbb0)
	/usr/local/go/src/testing/testing.go:1695 +0x134
testing.runTests(0xc000a14300, {0x4ec51a0, 0x2a, 0x2a}, {0x2c26359?, 0xb381af?, 0x4ee7980?})
	/usr/local/go/src/testing/testing.go:2159 +0x445
testing.(*M).Run(0xc0005fe960)
	/usr/local/go/src/testing/testing.go:2027 +0x68b
k8s.io/minikube/test/integration.TestMain(0xc0005fe960)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/main_test.go:62 +0x8b
main.main()
	_testmain.go:131 +0x195

goroutine 14 [select]:
go.opencensus.io/stats/view.(*worker).start(0xc00045fe00)
	/var/lib/jenkins/go/pkg/mod/go.opencensus.io@v0.24.0/stats/view/worker.go:292 +0x9f
created by go.opencensus.io/stats/view.init.0 in goroutine 1
	/var/lib/jenkins/go/pkg/mod/go.opencensus.io@v0.24.0/stats/view/worker.go:34 +0x8d

goroutine 1845 [syscall, locked to thread]:
syscall.SyscallN(0xc000c65bf0?, {0xc000c65b20?, 0xa97f45?, 0x4f74de0?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall6(0x3b9e1f0?, 0xc000c65b80?, 0xa8fe76?, 0x4f74de0?, 0xc000c65c08?, 0xa828db?, 0xa78c66?, 0xc002200841?)
	/usr/local/go/src/runtime/syscall_windows.go:488 +0x4a
syscall.readFile(0x7e0, {0xc000a08df3?, 0x20d, 0xb342bf?}, 0x0?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1021 +0x8b
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:442
syscall.Read(0xc0006dd188?, {0xc000a08df3?, 0xabc211?, 0x400?})
	/usr/local/go/src/syscall/syscall_windows.go:421 +0x2d
internal/poll.(*FD).Read(0xc0006dd188, {0xc000a08df3, 0x20d, 0x20d})
	/usr/local/go/src/internal/poll/fd_windows.go:422 +0x1c5
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc00212e0a8, {0xc000a08df3?, 0xc000c65d98?, 0x70?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc00269c270, {0x3b7a840, 0xc0027fa0e0})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3b7a980, 0xc00269c270}, {0x3b7a840, 0xc0027fa0e0}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0x0?, {0x3b7a980, 0xc00269c270})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0x4e79aa0?, {0x3b7a980?, 0xc00269c270?})
	/usr/local/go/src/os/file.go:247 +0x49
io.copyBuffer({0x3b7a980, 0xc00269c270}, {0x3b7a900, 0xc00212e0a8}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0xc0027f8060?)
	/usr/local/go/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 1844
	/usr/local/go/src/os/exec/exec.go:723 +0xa2b

goroutine 807 [chan send, 148 minutes]:
os/exec.(*Cmd).watchCtx(0xc0021c6000, 0xc002ae0780)
	/usr/local/go/src/os/exec/exec.go:789 +0x3ff
created by os/exec.(*Cmd).Start in goroutine 378
	/usr/local/go/src/os/exec/exec.go:750 +0x9f3

goroutine 54 [select]:
k8s.io/klog/v2.(*flushDaemon).run.func1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/klog/v2@v2.120.1/klog.go:1174 +0x117
created by k8s.io/klog/v2.(*flushDaemon).run in goroutine 27
	/var/lib/jenkins/go/pkg/mod/k8s.io/klog/v2@v2.120.1/klog.go:1170 +0x171

goroutine 1570 [chan receive, 14 minutes]:
testing.(*T).Run(0xc0021ef040, {0x2bcba4f?, 0xbc7613?}, 0x362f9d0)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop(0xc0021ef040)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:46 +0x35
testing.tRunner(0xc0021ef040, 0x362f7f8)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 1816 [syscall, 5 minutes, locked to thread]:
syscall.SyscallN(0x7fff4c5b4de0?, {0xc002291bd0?, 0x3?, 0x0?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall(0x3?, 0x3?, 0x1?, 0x2?, 0x0?)
	/usr/local/go/src/runtime/syscall_windows.go:482 +0x35
syscall.WaitForSingleObject(0x744, 0xffffffff)
	/usr/local/go/src/syscall/zsyscall_windows.go:1142 +0x5d
os.(*Process).wait(0xc002cd6000)
	/usr/local/go/src/os/exec_windows.go:18 +0x50
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:134
os/exec.(*Cmd).Wait(0xc000c34420)
	/usr/local/go/src/os/exec/exec.go:897 +0x45
os/exec.(*Cmd).Run(0xc000c34420)
	/usr/local/go/src/os/exec/exec.go:607 +0x2d
k8s.io/minikube/test/integration.Run(0xc000a73a00, 0xc000c34420)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1.1(0xc000a73a00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:112 +0x52
testing.tRunner(0xc000a73a00, 0xc00269c2d0)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1604
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 1789 [select, 5 minutes]:
os/exec.(*Cmd).watchCtx(0xc000c34420, 0xc0027f82a0)
	/usr/local/go/src/os/exec/exec.go:764 +0xb5
created by os/exec.(*Cmd).Start in goroutine 1816
	/usr/local/go/src/os/exec/exec.go:750 +0x9f3

goroutine 1794 [chan receive, 14 minutes]:
testing.(*testContext).waitParallel(0xc0007aafa0)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc000a72680)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc000a72680)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc000a72680)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:94 +0x45
testing.tRunner(0xc000a72680, 0xc000a7a5c0)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1713
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 193 [IO wait, 169 minutes]:
internal/poll.runtime_pollWait(0x1ffea934620, 0x72)
	/usr/local/go/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xa8fe76?, 0x4f74de0?, 0x0)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.execIO(0xc000a6f420, 0xc000befbb0)
	/usr/local/go/src/internal/poll/fd_windows.go:175 +0xe6
internal/poll.(*FD).acceptOne(0xc000a6f408, 0x3f0, {0xc00207c000?, 0x0?, 0x0?}, 0xc000700808?)
	/usr/local/go/src/internal/poll/fd_windows.go:944 +0x67
internal/poll.(*FD).Accept(0xc000a6f408, 0xc000befd90)
	/usr/local/go/src/internal/poll/fd_windows.go:978 +0x1bc
net.(*netFD).accept(0xc000a6f408)
	/usr/local/go/src/net/fd_windows.go:178 +0x54
net.(*TCPListener).accept(0xc0009478a0)
	/usr/local/go/src/net/tcpsock_posix.go:159 +0x1e
net.(*TCPListener).Accept(0xc0009478a0)
	/usr/local/go/src/net/tcpsock.go:327 +0x30
net/http.(*Server).Serve(0xc000a100f0, {0x3b92050, 0xc0009478a0})
	/usr/local/go/src/net/http/server.go:3255 +0x33e
net/http.(*Server).ListenAndServe(0xc000a100f0)
	/usr/local/go/src/net/http/server.go:3184 +0x71
k8s.io/minikube/test/integration.startHTTPProxy.func1(0xc0e6e4?, 0xc00079c9c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/functional_test.go:2209 +0x18
created by k8s.io/minikube/test/integration.startHTTPProxy in goroutine 191
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/functional_test.go:2208 +0x129

goroutine 1759 [select, 7 minutes]:
os/exec.(*Cmd).watchCtx(0xc000c342c0, 0xc000a83f20)
	/usr/local/go/src/os/exec/exec.go:764 +0xb5
created by os/exec.(*Cmd).Start in goroutine 1756
	/usr/local/go/src/os/exec/exec.go:750 +0x9f3

goroutine 1756 [syscall, 7 minutes, locked to thread]:
syscall.SyscallN(0x7fff4c5b4de0?, {0xc0026ddbd0?, 0x3?, 0x0?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall(0x3?, 0x3?, 0x1?, 0x2?, 0x0?)
	/usr/local/go/src/runtime/syscall_windows.go:482 +0x35
syscall.WaitForSingleObject(0x708, 0xffffffff)
	/usr/local/go/src/syscall/zsyscall_windows.go:1142 +0x5d
os.(*Process).wait(0xc00225c660)
	/usr/local/go/src/os/exec_windows.go:18 +0x50
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:134
os/exec.(*Cmd).Wait(0xc000c342c0)
	/usr/local/go/src/os/exec/exec.go:897 +0x45
os/exec.(*Cmd).Run(0xc000c342c0)
	/usr/local/go/src/os/exec/exec.go:607 +0x2d
k8s.io/minikube/test/integration.Run(0xc000a72d00, 0xc000c342c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1.1(0xc000a72d00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:112 +0x52
testing.tRunner(0xc000a72d00, 0xc00269c000)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1552
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 1847 [select]:
os/exec.(*Cmd).watchCtx(0xc002314160, 0xc000055740)
	/usr/local/go/src/os/exec/exec.go:764 +0xb5
created by os/exec.(*Cmd).Start in goroutine 1844
	/usr/local/go/src/os/exec/exec.go:750 +0x9f3

goroutine 1846 [syscall, locked to thread]:
syscall.SyscallN(0x0?, {0xc000c51b20?, 0xa97f45?, 0x4f74de0?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall6(0xa82d59?, 0xc000c51b80?, 0xa8fe76?, 0x4f74de0?, 0xc000c51c08?, 0xa82a45?, 0x1ffe4f30108?, 0xc000749c67?)
	/usr/local/go/src/runtime/syscall_windows.go:488 +0x4a
syscall.readFile(0x398, {0xc000c15cb0?, 0x350, 0xb342bf?}, 0x0?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1021 +0x8b
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:442
syscall.Read(0xc0006dd908?, {0xc000c15cb0?, 0xabc25e?, 0x2000?})
	/usr/local/go/src/syscall/syscall_windows.go:421 +0x2d
internal/poll.(*FD).Read(0xc0006dd908, {0xc000c15cb0, 0x350, 0x350})
	/usr/local/go/src/internal/poll/fd_windows.go:422 +0x1c5
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc00212e0c0, {0xc000c15cb0?, 0xc000605880?, 0x1000?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc00269c2a0, {0x3b7a840, 0xc002200450})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3b7a980, 0xc00269c2a0}, {0x3b7a840, 0xc002200450}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xc000c51e78?, {0x3b7a980, 0xc00269c2a0})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0x4e79aa0?, {0x3b7a980?, 0xc00269c2a0?})
	/usr/local/go/src/os/file.go:247 +0x49
io.copyBuffer({0x3b7a980, 0xc00269c2a0}, {0x3b7a900, 0xc00212e0c0}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0xc002444c60?)
	/usr/local/go/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 1844
	/usr/local/go/src/os/exec/exec.go:723 +0xa2b

goroutine 462 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc00206cde0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.3/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 467
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.3/util/workqueue/delaying_queue.go:113 +0x205

goroutine 1797 [chan receive, 14 minutes]:
testing.(*testContext).waitParallel(0xc0007aafa0)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc000a72ea0)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc000a72ea0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc000a72ea0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:94 +0x45
testing.tRunner(0xc000a72ea0, 0xc000a7a680)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1713
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 1844 [syscall, locked to thread]:
syscall.SyscallN(0x7fff4c5b4de0?, {0xc0024b1bd0?, 0x3?, 0x0?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall(0x3?, 0x3?, 0x1?, 0x2?, 0x0?)
	/usr/local/go/src/runtime/syscall_windows.go:482 +0x35
syscall.WaitForSingleObject(0x320, 0xffffffff)
	/usr/local/go/src/syscall/zsyscall_windows.go:1142 +0x5d
os.(*Process).wait(0xc00225ccf0)
	/usr/local/go/src/os/exec_windows.go:18 +0x50
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:134
os/exec.(*Cmd).Wait(0xc002314160)
	/usr/local/go/src/os/exec/exec.go:897 +0x45
os/exec.(*Cmd).Run(0xc002314160)
	/usr/local/go/src/os/exec/exec.go:607 +0x2d
k8s.io/minikube/test/integration.Run(0xc0021ee820, 0xc002314160)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1.1(0xc0021ee820)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:112 +0x52
testing.tRunner(0xc0021ee820, 0xc00269c180)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1607
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 1604 [chan receive, 5 minutes]:
testing.(*T).Run(0xc0026009c0, {0x2bcba54?, 0x3b748c0?}, 0xc00269c2d0)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc0026009c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:111 +0x5de
testing.tRunner(0xc0026009c0, 0xc0006c2700)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1576
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 1588 [chan receive, 36 minutes]:
testing.(*testContext).waitParallel(0xc0007aafa0)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc000a72820)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc000a72820)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc000a72820)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc000a72820, 0xc002328080)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1576
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 477 [chan send, 154 minutes]:
os/exec.(*Cmd).watchCtx(0xc0021c69a0, 0xc00230a060)
	/usr/local/go/src/os/exec/exec.go:789 +0x3ff
created by os/exec.(*Cmd).Start in goroutine 476
	/usr/local/go/src/os/exec/exec.go:750 +0x9f3

goroutine 1606 [chan receive, 36 minutes]:
testing.(*testContext).waitParallel(0xc0007aafa0)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc002600d00)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc002600d00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc002600d00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc002600d00, 0xc0006c2880)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1576
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 1607 [chan receive]:
testing.(*T).Run(0xc002600ea0, {0x2bcba54?, 0x3b748c0?}, 0xc00269c180)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc002600ea0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:111 +0x5de
testing.tRunner(0xc002600ea0, 0xc0006c2980)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1576
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 1795 [chan receive, 14 minutes]:
testing.(*testContext).waitParallel(0xc0007aafa0)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc000a729c0)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc000a729c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc000a729c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:94 +0x45
testing.tRunner(0xc000a729c0, 0xc000a7a600)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1713
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 486 [sync.Cond.Wait, 5 minutes]:
sync.runtime_notifyListWait(0xc00232c490, 0x37)
	/usr/local/go/src/runtime/sema.go:569 +0x15d
sync.(*Cond).Wait(0x26e5ae0?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc00206ccc0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.3/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc00232c4c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.3/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.3/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000286010, {0x3b7bc80, 0xc0023a53e0}, 0x1, 0xc000a82000)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc000286010, 0x3b9aca00, 0x0, 0x1, 0xc000a82000)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 463
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.3/transport/cert_rotation.go:140 +0x1ef

goroutine 1760 [syscall, locked to thread]:
syscall.SyscallN(0xc0022c9b10?, {0xc0022c9b20?, 0xa97f45?, 0x4f74de0?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall6(0x1ffffffffffffff?, 0xc0022c9b80?, 0xa8fe76?, 0x4f74de0?, 0xc0022c9c08?, 0xa828db?, 0xa78c66?, 0x8041?)
	/usr/local/go/src/runtime/syscall_windows.go:488 +0x4a
syscall.readFile(0x788, {0xc0020c9a35?, 0x5cb, 0xb342bf?}, 0x0?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1021 +0x8b
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:442
syscall.Read(0xc00207aa08?, {0xc0020c9a35?, 0xabc25e?, 0x800?})
	/usr/local/go/src/syscall/syscall_windows.go:421 +0x2d
internal/poll.(*FD).Read(0xc00207aa08, {0xc0020c9a35, 0x5cb, 0x5cb})
	/usr/local/go/src/internal/poll/fd_windows.go:422 +0x1c5
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc0007a2490, {0xc0020c9a35?, 0xc0022c9d98?, 0x234?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc00269c390, {0x3b7a840, 0xc0027fa070})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3b7a980, 0xc00269c390}, {0x3b7a840, 0xc0027fa070}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0x0?, {0x3b7a980, 0xc00269c390})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0x4e79aa0?, {0x3b7a980?, 0xc00269c390?})
	/usr/local/go/src/os/file.go:247 +0x49
io.copyBuffer({0x3b7a980, 0xc00269c390}, {0x3b7a900, 0xc0007a2490}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0xc000054f00?)
	/usr/local/go/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 1816
	/usr/local/go/src/os/exec/exec.go:723 +0xa2b

goroutine 1605 [chan receive, 36 minutes]:
testing.(*testContext).waitParallel(0xc0007aafa0)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc002600b60)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc002600b60)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc002600b60)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc002600b60, 0xc0006c2800)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1576
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 1713 [chan receive, 14 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1650 +0x4ab
testing.tRunner(0xc000a72340, 0x362f9d0)
	/usr/local/go/src/testing/testing.go:1695 +0x134
created by testing.(*T).Run in goroutine 1570
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 1757 [syscall, locked to thread]:
syscall.SyscallN(0x2d6e6f6974617269?, {0xc0022cbb20?, 0xa97f45?, 0x4f74de0?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall6(0xc002d33220?, 0xc0022cbb80?, 0xa8fe76?, 0x4f74de0?, 0xc0022cbc08?, 0xa82a45?, 0x1ffe4f30108?, 0xaf8d4d?)
	/usr/local/go/src/runtime/syscall_windows.go:488 +0x4a
syscall.readFile(0x6e4, {0xc000902260?, 0x5a0, 0xb342bf?}, 0x0?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1021 +0x8b
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:442
syscall.Read(0xc0022f8f08?, {0xc000902260?, 0xabc25e?, 0x800?})
	/usr/local/go/src/syscall/syscall_windows.go:421 +0x2d
internal/poll.(*FD).Read(0xc0022f8f08, {0xc000902260, 0x5a0, 0x5a0})
	/usr/local/go/src/internal/poll/fd_windows.go:422 +0x1c5
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc0007a2388, {0xc000902260?, 0xc0022cbd98?, 0x22b?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc00269c0f0, {0x3b7a840, 0xc0027fa058})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3b7a980, 0xc00269c0f0}, {0x3b7a840, 0xc0027fa058}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0x0?, {0x3b7a980, 0xc00269c0f0})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0x4e79aa0?, {0x3b7a980?, 0xc00269c0f0?})
	/usr/local/go/src/os/file.go:247 +0x49
io.copyBuffer({0x3b7a980, 0xc00269c0f0}, {0x3b7a900, 0xc0007a2388}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0xc002444180?)
	/usr/local/go/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 1756
	/usr/local/go/src/os/exec/exec.go:723 +0xa2b

goroutine 488 [select, 5 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 487
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/poll.go:280 +0xbb

goroutine 487 [select, 5 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x3b9e500, 0xc000a82000}, 0xc0024adf50, 0xc0024adf98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x3b9e500, 0xc000a82000}, 0x90?, 0xc0024adf50, 0xc0024adf98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x3b9e500?, 0xc000a82000?}, 0x0?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc0024adfd0?, 0xc0e6e4?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.3/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 463
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.3/transport/cert_rotation.go:142 +0x29a

goroutine 463 [chan receive, 154 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc00232c4c0, 0xc000a82000)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.3/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 467
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.3/transport/cache.go:122 +0x585

goroutine 1798 [chan receive, 14 minutes]:
testing.(*testContext).waitParallel(0xc0007aafa0)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc000a731e0)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc000a731e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc000a731e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:94 +0x45
testing.tRunner(0xc000a731e0, 0xc000a7a700)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1713
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 1799 [chan receive, 14 minutes]:
testing.(*testContext).waitParallel(0xc0007aafa0)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc000a73520)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc000a73520)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc000a73520)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:94 +0x45
testing.tRunner(0xc000a73520, 0xc000a7a780)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1713
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 1796 [chan receive, 14 minutes]:
testing.(*testContext).waitParallel(0xc0007aafa0)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc000a72b60)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc000a72b60)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc000a72b60)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:94 +0x45
testing.tRunner(0xc000a72b60, 0xc000a7a640)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1713
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 1790 [syscall, 3 minutes, locked to thread]:
syscall.SyscallN(0x0?, {0xc002931b20?, 0xa97f45?, 0x4f74de0?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall6(0xc000002135?, 0xc002931b80?, 0xa8fe76?, 0x4f74de0?, 0xc002931c08?, 0xa828db?, 0xa78c66?, 0xc000c10041?)
	/usr/local/go/src/runtime/syscall_windows.go:488 +0x4a
syscall.readFile(0x648, {0xc0009f7a15?, 0x5eb, 0xc0009f7800?}, 0x0?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1021 +0x8b
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:442
syscall.Read(0xc002108288?, {0xc0009f7a15?, 0xabc25e?, 0x800?})
	/usr/local/go/src/syscall/syscall_windows.go:421 +0x2d
internal/poll.(*FD).Read(0xc002108288, {0xc0009f7a15, 0x5eb, 0x5eb})
	/usr/local/go/src/internal/poll/fd_windows.go:422 +0x1c5
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc0027fa0d0, {0xc0009f7a15?, 0xc002931d98?, 0x215?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc00219fda0, {0x3b7a840, 0xc00212e030})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3b7a980, 0xc00219fda0}, {0x3b7a840, 0xc00212e030}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0x0?, {0x3b7a980, 0xc00219fda0})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0x4e79aa0?, {0x3b7a980?, 0xc00219fda0?})
	/usr/local/go/src/os/file.go:247 +0x49
io.copyBuffer({0x3b7a980, 0xc00219fda0}, {0x3b7a900, 0xc0027fa0d0}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0xc0024440c0?)
	/usr/local/go/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 1574
	/usr/local/go/src/os/exec/exec.go:723 +0xa2b

goroutine 1791 [syscall, 3 minutes, locked to thread]:
syscall.SyscallN(0x40?, {0xc00239db20?, 0xa97f45?, 0x4f74de0?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall6(0xab1a41?, 0xc00239db80?, 0xa8fe76?, 0x4f74de0?, 0xc00239dc08?, 0xa82a45?, 0x1ffe4f30108?, 0x77?)
	/usr/local/go/src/runtime/syscall_windows.go:488 +0x4a
syscall.readFile(0x674, {0xc000a5a254?, 0x1dac, 0xb342bf?}, 0x0?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1021 +0x8b
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:442
syscall.Read(0xc002108788?, {0xc000a5a254?, 0xabc25e?, 0x4000?})
	/usr/local/go/src/syscall/syscall_windows.go:421 +0x2d
internal/poll.(*FD).Read(0xc002108788, {0xc000a5a254, 0x1dac, 0x1dac})
	/usr/local/go/src/internal/poll/fd_windows.go:422 +0x1c5
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc0027fa0f0, {0xc000a5a254?, 0xc00239dd98?, 0x1e61?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc00219fdd0, {0x3b7a840, 0xc002200508})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3b7a980, 0xc00219fdd0}, {0x3b7a840, 0xc002200508}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0x0?, {0x3b7a980, 0xc00219fdd0})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0x4e79aa0?, {0x3b7a980?, 0xc00219fdd0?})
	/usr/local/go/src/os/file.go:247 +0x49
io.copyBuffer({0x3b7a980, 0xc00219fdd0}, {0x3b7a900, 0xc0027fa0f0}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0xc002445320?)
	/usr/local/go/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 1574
	/usr/local/go/src/os/exec/exec.go:723 +0xa2b

goroutine 1603 [chan receive, 36 minutes]:
testing.(*testContext).waitParallel(0xc0007aafa0)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc002600820)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc002600820)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc002600820)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc002600820, 0xc0006c2680)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1576
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 1602 [chan receive, 36 minutes]:
testing.(*testContext).waitParallel(0xc0007aafa0)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc002600680)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc002600680)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc002600680)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc002600680, 0xc0006c2580)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1576
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 1792 [select, 3 minutes]:
os/exec.(*Cmd).watchCtx(0xc0021c6160, 0xc0027f8c60)
	/usr/local/go/src/os/exec/exec.go:764 +0xb5
created by os/exec.(*Cmd).Start in goroutine 1574
	/usr/local/go/src/os/exec/exec.go:750 +0x9f3

goroutine 1553 [chan receive, 36 minutes]:
testing.(*testContext).waitParallel(0xc0007aafa0)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0026001a0)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0026001a0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc0026001a0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc0026001a0, 0xc0006c2500)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1576
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 1526 [chan receive, 36 minutes]:
testing.(*T).Run(0xc0021ee680, {0x2bcba4f?, 0xaef56d?}, 0xc00095ab58)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestNetworkPlugins(0xc0021ee680)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:52 +0xd4
testing.tRunner(0xc0021ee680, 0x362f7b0)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 1758 [syscall, locked to thread]:
syscall.SyscallN(0xc0020e9b10?, {0xc0020e9b20?, 0xa97f45?, 0x4ef4840?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall6(0x100000000a82db9?, 0xc0020e9b80?, 0xa8fe76?, 0x4f74de0?, 0xc0020e9c08?, 0xa82a45?, 0x0?, 0x10000?)
	/usr/local/go/src/runtime/syscall_windows.go:488 +0x4a
syscall.readFile(0x76c, {0xc00224997b?, 0x685, 0xb342bf?}, 0x0?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1021 +0x8b
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:442
syscall.Read(0xc0022f9688?, {0xc00224997b?, 0xabc25e?, 0x10000?})
	/usr/local/go/src/syscall/syscall_windows.go:421 +0x2d
internal/poll.(*FD).Read(0xc0022f9688, {0xc00224997b, 0x685, 0x685})
	/usr/local/go/src/internal/poll/fd_windows.go:422 +0x1c5
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc0007a23c0, {0xc00224997b?, 0x1ffea6937f8?, 0x7e60?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc00269c120, {0x3b7a840, 0xc0022002f8})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3b7a980, 0xc00269c120}, {0x3b7a840, 0xc0022002f8}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xc0020e9e78?, {0x3b7a980, 0xc00269c120})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0x4e79aa0?, {0x3b7a980?, 0xc00269c120?})
	/usr/local/go/src/os/file.go:247 +0x49
io.copyBuffer({0x3b7a980, 0xc00269c120}, {0x3b7a900, 0xc0007a23c0}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0xc000a83d40?)
	/usr/local/go/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 1756
	/usr/local/go/src/os/exec/exec.go:723 +0xa2b

goroutine 1552 [chan receive, 7 minutes]:
testing.(*T).Run(0xc002600000, {0x2bcba54?, 0x3b748c0?}, 0xc00269c000)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc002600000)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:111 +0x5de
testing.tRunner(0xc002600000, 0xc0006c2180)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1576
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 1788 [syscall, locked to thread]:
syscall.SyscallN(0xc000c13b10?, {0xc000c13b20?, 0xa97f45?, 0x4ef4840?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall6(0x10000c000bd1077?, 0xc000c13b80?, 0xa8fe76?, 0x4f74de0?, 0xc000c13c08?, 0xa82a45?, 0x0?, 0x0?)
	/usr/local/go/src/runtime/syscall_windows.go:488 +0x4a
syscall.readFile(0x75c, {0xc0020da1b9?, 0x3e47, 0xb342bf?}, 0x0?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1021 +0x8b
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:442
syscall.Read(0xc00207b188?, {0xc0020da1b9?, 0xc000c13c50?, 0x8000?})
	/usr/local/go/src/syscall/syscall_windows.go:421 +0x2d
internal/poll.(*FD).Read(0xc00207b188, {0xc0020da1b9, 0x3e47, 0x3e47})
	/usr/local/go/src/internal/poll/fd_windows.go:422 +0x1c5
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc0007a24a8, {0xc0020da1b9?, 0x0?, 0x3e5b?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc00269c3c0, {0x3b7a840, 0xc0000a6e38})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3b7a980, 0xc00269c3c0}, {0x3b7a840, 0xc0000a6e38}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0x2bccf3b?, {0x3b7a980, 0xc00269c3c0})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0x4e79aa0?, {0x3b7a980?, 0xc00269c3c0?})
	/usr/local/go/src/os/file.go:247 +0x49
io.copyBuffer({0x3b7a980, 0xc00269c3c0}, {0x3b7a900, 0xc0007a24a8}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0x362f7c8?)
	/usr/local/go/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 1816
	/usr/local/go/src/os/exec/exec.go:723 +0xa2b

goroutine 1574 [syscall, 3 minutes, locked to thread]:
syscall.SyscallN(0x7fff4c5b4de0?, {0xc000c21798?, 0x3?, 0x0?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall(0x3?, 0x3?, 0x1?, 0x2?, 0x0?)
	/usr/local/go/src/runtime/syscall_windows.go:482 +0x35
syscall.WaitForSingleObject(0x62c, 0xffffffff)
	/usr/local/go/src/syscall/zsyscall_windows.go:1142 +0x5d
os.(*Process).wait(0xc00063be60)
	/usr/local/go/src/os/exec_windows.go:18 +0x50
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:134
os/exec.(*Cmd).Wait(0xc0021c6160)
	/usr/local/go/src/os/exec/exec.go:897 +0x45
os/exec.(*Cmd).Run(0xc0021c6160)
	/usr/local/go/src/os/exec/exec.go:607 +0x2d
k8s.io/minikube/test/integration.Run(0xc0021ef6c0, 0xc0021c6160)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.TestKubernetesUpgrade(0xc0021ef6c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/version_upgrade_test.go:275 +0x1445
testing.tRunner(0xc0021ef6c0, 0x362f778)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 1576 [chan receive, 36 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1650 +0x4ab
testing.tRunner(0xc0021efa00, 0xc00095ab58)
	/usr/local/go/src/testing/testing.go:1695 +0x134
created by testing.(*T).Run in goroutine 1526
	/usr/local/go/src/testing/testing.go:1742 +0x390


Test pass (148/193)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 19.23
4 TestDownloadOnly/v1.20.0/preload-exists 0.09
7 TestDownloadOnly/v1.20.0/kubectl 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.54
9 TestDownloadOnly/v1.20.0/DeleteAll 1.46
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 1.43
12 TestDownloadOnly/v1.29.3/json-events 12.03
13 TestDownloadOnly/v1.29.3/preload-exists 0
16 TestDownloadOnly/v1.29.3/kubectl 0
17 TestDownloadOnly/v1.29.3/LogsDuration 0.29
18 TestDownloadOnly/v1.29.3/DeleteAll 1.49
19 TestDownloadOnly/v1.29.3/DeleteAlwaysSucceeds 1.48
21 TestDownloadOnly/v1.30.0-beta.0/json-events 11.79
22 TestDownloadOnly/v1.30.0-beta.0/preload-exists 0
25 TestDownloadOnly/v1.30.0-beta.0/kubectl 0
26 TestDownloadOnly/v1.30.0-beta.0/LogsDuration 0.31
27 TestDownloadOnly/v1.30.0-beta.0/DeleteAll 1.33
28 TestDownloadOnly/v1.30.0-beta.0/DeleteAlwaysSucceeds 1.25
30 TestBinaryMirror 7.71
31 TestOffline 270.32
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.31
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.31
37 TestCertOptions 533.72
38 TestCertExpiration 1037.46
39 TestDockerFlags 659.2
40 TestForceSystemdFlag 588.05
41 TestForceSystemdEnv 685
48 TestErrorSpam/start 18.59
49 TestErrorSpam/status 40.36
50 TestErrorSpam/pause 24.66
51 TestErrorSpam/unpause 24.83
52 TestErrorSpam/stop 65.65
55 TestFunctional/serial/CopySyncFile 0.04
56 TestFunctional/serial/StartWithProxy 256.43
57 TestFunctional/serial/AuditLog 0
58 TestFunctional/serial/SoftStart 135.02
59 TestFunctional/serial/KubeContext 0.14
60 TestFunctional/serial/KubectlGetPods 0.26
63 TestFunctional/serial/CacheCmd/cache/add_remote 28.56
64 TestFunctional/serial/CacheCmd/cache/add_local 11.15
65 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.27
66 TestFunctional/serial/CacheCmd/cache/list 0.28
67 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 10.28
68 TestFunctional/serial/CacheCmd/cache/cache_reload 39.12
69 TestFunctional/serial/CacheCmd/cache/delete 0.57
70 TestFunctional/serial/MinikubeKubectlCmd 0.51
72 TestFunctional/serial/ExtraConfig 131.4
73 TestFunctional/serial/ComponentHealth 0.2
74 TestFunctional/serial/LogsCmd 9.21
75 TestFunctional/serial/LogsFileCmd 11.61
76 TestFunctional/serial/InvalidService 22.76
82 TestFunctional/parallel/StatusCmd 43.72
86 TestFunctional/parallel/ServiceCmdConnect 29.13
87 TestFunctional/parallel/AddonsCmd 0.89
88 TestFunctional/parallel/PersistentVolumeClaim 42.7
90 TestFunctional/parallel/SSHCmd 24.55
91 TestFunctional/parallel/CpCmd 67.57
92 TestFunctional/parallel/MySQL 67.47
93 TestFunctional/parallel/FileSync 11.69
94 TestFunctional/parallel/CertSync 72.93
98 TestFunctional/parallel/NodeLabels 0.22
100 TestFunctional/parallel/NonActiveRuntimeDisabled 12.76
102 TestFunctional/parallel/License 3.67
103 TestFunctional/parallel/Version/short 0.29
104 TestFunctional/parallel/Version/components 10.02
105 TestFunctional/parallel/ImageCommands/ImageListShort 8.08
106 TestFunctional/parallel/ImageCommands/ImageListTable 8.13
107 TestFunctional/parallel/ImageCommands/ImageListJson 7.88
108 TestFunctional/parallel/ImageCommands/ImageListYaml 8.07
109 TestFunctional/parallel/ImageCommands/ImageBuild 28.48
110 TestFunctional/parallel/ImageCommands/Setup 4.72
111 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 26.73
112 TestFunctional/parallel/DockerEnv/powershell 51.62
113 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 23.79
114 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 31.83
115 TestFunctional/parallel/ServiceCmd/DeployApp 18.52
117 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 9.95
118 TestFunctional/parallel/ServiceCmd/List 15.44
119 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
121 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 15.85
122 TestFunctional/parallel/ImageCommands/ImageSaveToFile 10.78
123 TestFunctional/parallel/ServiceCmd/JSONOutput 14.25
124 TestFunctional/parallel/ImageCommands/ImageRemove 16.76
130 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.12
132 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 19.44
134 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 11.57
135 TestFunctional/parallel/ProfileCmd/profile_not_create 12.5
137 TestFunctional/parallel/ProfileCmd/profile_list 12.07
138 TestFunctional/parallel/ProfileCmd/profile_json_output 11.44
139 TestFunctional/parallel/UpdateContextCmd/no_changes 2.73
140 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 2.61
141 TestFunctional/parallel/UpdateContextCmd/no_clusters 2.75
142 TestFunctional/delete_addon-resizer_images 0.51
143 TestFunctional/delete_my-image_image 0.19
144 TestFunctional/delete_minikube_cached_images 0.2
148 TestMultiControlPlane/serial/StartCluster 871.71
149 TestMultiControlPlane/serial/DeployApp 12.62
151 TestMultiControlPlane/serial/AddWorkerNode 268.3
152 TestMultiControlPlane/serial/NodeLabels 0.21
153 TestMultiControlPlane/serial/HAppyAfterClusterStart 30.52
157 TestImageBuild/serial/Setup 209.75
158 TestImageBuild/serial/NormalBuild 10.19
159 TestImageBuild/serial/BuildWithBuildArg 9.59
160 TestImageBuild/serial/BuildWithDockerIgnore 8.23
161 TestImageBuild/serial/BuildWithSpecifiedDockerfile 8.12
165 TestJSONOutput/start/Command 252.84
166 TestJSONOutput/start/Audit 0
168 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
169 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
171 TestJSONOutput/pause/Command 8.44
172 TestJSONOutput/pause/Audit 0
174 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
175 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
177 TestJSONOutput/unpause/Command 8.16
178 TestJSONOutput/unpause/Audit 0
180 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
181 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
183 TestJSONOutput/stop/Command 36.73
184 TestJSONOutput/stop/Audit 0
186 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
187 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
188 TestErrorJSONOutput 1.61
193 TestMainNoArgs 0.3
194 TestMinikubeProfile 563.32
197 TestMountStart/serial/StartWithMountFirst 165.47
198 TestMountStart/serial/VerifyMountFirst 10.1
199 TestMountStart/serial/StartWithMountSecond 166.58
200 TestMountStart/serial/VerifyMountSecond 9.98
201 TestMountStart/serial/DeleteFirst 29.58
202 TestMountStart/serial/VerifyMountPostDelete 10.12
203 TestMountStart/serial/Stop 28.01
204 TestMountStart/serial/RestartStopped 124.84
205 TestMountStart/serial/VerifyMountPostStop 10.06
208 TestMultiNode/serial/FreshStart2Nodes 453.54
209 TestMultiNode/serial/DeployApp2Nodes 9.5
211 TestMultiNode/serial/AddNode 246.11
212 TestMultiNode/serial/MultiNodeLabels 0.2
213 TestMultiNode/serial/ProfileList 12.78
214 TestMultiNode/serial/CopyFile 385.92
215 TestMultiNode/serial/StopNode 82.05
216 TestMultiNode/serial/StartAfterStop 194.13
221 TestPreload 523.6
222 TestScheduledStopWindows 347.8
227 TestRunningBinaryUpgrade 1154.13
232 TestNoKubernetes/serial/StartNoK8sWithVersion 0.44
245 TestStoppedBinaryUpgrade/Setup 0.66
246 TestStoppedBinaryUpgrade/Upgrade 907.09
258 TestStoppedBinaryUpgrade/MinikubeLogs 9.94
TestDownloadOnly/v1.20.0/json-events (19.23s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-439300 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=hyperv
aaa_download_only_test.go:81: (dbg) Done: out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-439300 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=hyperv: (19.2248319s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (19.23s)

TestDownloadOnly/v1.20.0/preload-exists (0.09s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.09s)

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
--- PASS: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.54s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-windows-amd64.exe logs -p download-only-439300
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-windows-amd64.exe logs -p download-only-439300: exit status 85 (538.5752ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|-------------------|----------------|---------------------|----------|
	| Command |              Args              |       Profile        |       User        |    Version     |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|-------------------|----------------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-439300 | minikube6\jenkins | v1.33.0-beta.0 | 27 Mar 24 23:27 UTC |          |
	|         | -p download-only-439300        |                      |                   |                |                     |          |
	|         | --force --alsologtostderr      |                      |                   |                |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |                   |                |                     |          |
	|         | --container-runtime=docker     |                      |                   |                |                     |          |
	|         | --driver=hyperv                |                      |                   |                |                     |          |
	|---------|--------------------------------|----------------------|-------------------|----------------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/27 23:27:59
	Running on machine: minikube6
	Binary: Built with gc go1.22.1 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0327 23:27:59.339573    7292 out.go:291] Setting OutFile to fd 612 ...
	I0327 23:27:59.340275    7292 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 23:27:59.340275    7292 out.go:304] Setting ErrFile to fd 616...
	I0327 23:27:59.340275    7292 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0327 23:27:59.358964    7292 root.go:314] Error reading config file at C:\Users\jenkins.minikube6\minikube-integration\.minikube\config\config.json: open C:\Users\jenkins.minikube6\minikube-integration\.minikube\config\config.json: The system cannot find the path specified.
	I0327 23:27:59.371130    7292 out.go:298] Setting JSON to true
	I0327 23:27:59.374285    7292 start.go:129] hostinfo: {"hostname":"minikube6","uptime":4740,"bootTime":1711577338,"procs":190,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4170 Build 19045.4170","kernelVersion":"10.0.19045.4170 Build 19045.4170","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0327 23:27:59.374285    7292 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0327 23:27:59.382088    7292 out.go:97] [download-only-439300] minikube v1.33.0-beta.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4170 Build 19045.4170
	I0327 23:27:59.385906    7292 out.go:169] KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0327 23:27:59.382601    7292 notify.go:220] Checking for updates...
	W0327 23:27:59.382601    7292 preload.go:294] Failed to list preload files: open C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball: The system cannot find the file specified.
	I0327 23:27:59.390708    7292 out.go:169] MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0327 23:27:59.397117    7292 out.go:169] MINIKUBE_LOCATION=18485
	I0327 23:27:59.399676    7292 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	W0327 23:27:59.415036    7292 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0327 23:27:59.416093    7292 driver.go:392] Setting default libvirt URI to qemu:///system
	I0327 23:28:05.521704    7292 out.go:97] Using the hyperv driver based on user configuration
	I0327 23:28:05.521918    7292 start.go:297] selected driver: hyperv
	I0327 23:28:05.521975    7292 start.go:901] validating driver "hyperv" against <nil>
	I0327 23:28:05.522317    7292 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0327 23:28:05.577010    7292 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=65534MB, container=0MB
	I0327 23:28:05.578028    7292 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0327 23:28:05.579091    7292 cni.go:84] Creating CNI manager for ""
	I0327 23:28:05.579091    7292 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0327 23:28:05.579091    7292 start.go:340] cluster config:
	{Name:download-only-439300 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:6000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-439300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0327 23:28:05.579770    7292 iso.go:125] acquiring lock: {Name:mk879943e10653d47fd8ae811a43a2f6cff06f02 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0327 23:28:05.587391    7292 out.go:97] Downloading VM boot image ...
	I0327 23:28:05.587391    7292 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-amd64.iso.sha256 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\iso\amd64\minikube-v1.33.0-1711559712-18485-amd64.iso
	I0327 23:28:10.704418    7292 out.go:97] Starting "download-only-439300" primary control-plane node in "download-only-439300" cluster
	I0327 23:28:10.704418    7292 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0327 23:28:10.745623    7292 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	I0327 23:28:10.745727    7292 cache.go:56] Caching tarball of preloaded images
	I0327 23:28:10.745998    7292 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0327 23:28:10.750540    7292 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0327 23:28:10.750616    7292 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0327 23:28:10.823038    7292 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4?checksum=md5:9a82241e9b8b4ad2b5cca73108f2c7a3 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	
	
	* The control-plane node download-only-439300 host does not exist
	  To start a cluster, run: "minikube start -p download-only-439300"

-- /stdout --
** stderr ** 
	W0327 23:28:18.577378    6228 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.54s)

TestDownloadOnly/v1.20.0/DeleteAll (1.46s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-windows-amd64.exe delete --all
aaa_download_only_test.go:197: (dbg) Done: out/minikube-windows-amd64.exe delete --all: (1.4600034s)
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (1.46s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (1.43s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-windows-amd64.exe delete -p download-only-439300
aaa_download_only_test.go:208: (dbg) Done: out/minikube-windows-amd64.exe delete -p download-only-439300: (1.4342222s)
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (1.43s)

TestDownloadOnly/v1.29.3/json-events (12.03s)

=== RUN   TestDownloadOnly/v1.29.3/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-277500 --force --alsologtostderr --kubernetes-version=v1.29.3 --container-runtime=docker --driver=hyperv
aaa_download_only_test.go:81: (dbg) Done: out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-277500 --force --alsologtostderr --kubernetes-version=v1.29.3 --container-runtime=docker --driver=hyperv: (12.0296801s)
--- PASS: TestDownloadOnly/v1.29.3/json-events (12.03s)

TestDownloadOnly/v1.29.3/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.29.3/preload-exists
--- PASS: TestDownloadOnly/v1.29.3/preload-exists (0.00s)

TestDownloadOnly/v1.29.3/kubectl (0s)

=== RUN   TestDownloadOnly/v1.29.3/kubectl
--- PASS: TestDownloadOnly/v1.29.3/kubectl (0.00s)

TestDownloadOnly/v1.29.3/LogsDuration (0.29s)

=== RUN   TestDownloadOnly/v1.29.3/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-windows-amd64.exe logs -p download-only-277500
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-windows-amd64.exe logs -p download-only-277500: exit status 85 (285.8046ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|-------------------|----------------|---------------------|---------------------|
	| Command |              Args              |       Profile        |       User        |    Version     |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|-------------------|----------------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-439300 | minikube6\jenkins | v1.33.0-beta.0 | 27 Mar 24 23:27 UTC |                     |
	|         | -p download-only-439300        |                      |                   |                |                     |                     |
	|         | --force --alsologtostderr      |                      |                   |                |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |                   |                |                     |                     |
	|         | --container-runtime=docker     |                      |                   |                |                     |                     |
	|         | --driver=hyperv                |                      |                   |                |                     |                     |
	| delete  | --all                          | minikube             | minikube6\jenkins | v1.33.0-beta.0 | 27 Mar 24 23:28 UTC | 27 Mar 24 23:28 UTC |
	| delete  | -p download-only-439300        | download-only-439300 | minikube6\jenkins | v1.33.0-beta.0 | 27 Mar 24 23:28 UTC | 27 Mar 24 23:28 UTC |
	| start   | -o=json --download-only        | download-only-277500 | minikube6\jenkins | v1.33.0-beta.0 | 27 Mar 24 23:28 UTC |                     |
	|         | -p download-only-277500        |                      |                   |                |                     |                     |
	|         | --force --alsologtostderr      |                      |                   |                |                     |                     |
	|         | --kubernetes-version=v1.29.3   |                      |                   |                |                     |                     |
	|         | --container-runtime=docker     |                      |                   |                |                     |                     |
	|         | --driver=hyperv                |                      |                   |                |                     |                     |
	|---------|--------------------------------|----------------------|-------------------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/27 23:28:22
	Running on machine: minikube6
	Binary: Built with gc go1.22.1 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0327 23:28:22.099121    6044 out.go:291] Setting OutFile to fd 716 ...
	I0327 23:28:22.099121    6044 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 23:28:22.099121    6044 out.go:304] Setting ErrFile to fd 720...
	I0327 23:28:22.099121    6044 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 23:28:22.123131    6044 out.go:298] Setting JSON to true
	I0327 23:28:22.127122    6044 start.go:129] hostinfo: {"hostname":"minikube6","uptime":4763,"bootTime":1711577338,"procs":191,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4170 Build 19045.4170","kernelVersion":"10.0.19045.4170 Build 19045.4170","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0327 23:28:22.127122    6044 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0327 23:28:22.133131    6044 out.go:97] [download-only-277500] minikube v1.33.0-beta.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4170 Build 19045.4170
	I0327 23:28:22.133131    6044 notify.go:220] Checking for updates...
	I0327 23:28:22.137142    6044 out.go:169] KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0327 23:28:22.140125    6044 out.go:169] MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0327 23:28:22.145132    6044 out.go:169] MINIKUBE_LOCATION=18485
	I0327 23:28:22.147127    6044 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	W0327 23:28:22.152127    6044 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0327 23:28:22.153161    6044 driver.go:392] Setting default libvirt URI to qemu:///system
	I0327 23:28:28.052824    6044 out.go:97] Using the hyperv driver based on user configuration
	I0327 23:28:28.053836    6044 start.go:297] selected driver: hyperv
	I0327 23:28:28.053935    6044 start.go:901] validating driver "hyperv" against <nil>
	I0327 23:28:28.054071    6044 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0327 23:28:28.105691    6044 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=65534MB, container=0MB
	I0327 23:28:28.107661    6044 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0327 23:28:28.107863    6044 cni.go:84] Creating CNI manager for ""
	I0327 23:28:28.108102    6044 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0327 23:28:28.108102    6044 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0327 23:28:28.108457    6044 start.go:340] cluster config:
	{Name:download-only-277500 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:6000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:download-only-277500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0327 23:28:28.108777    6044 iso.go:125] acquiring lock: {Name:mk879943e10653d47fd8ae811a43a2f6cff06f02 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0327 23:28:28.112266    6044 out.go:97] Starting "download-only-277500" primary control-plane node in "download-only-277500" cluster
	I0327 23:28:28.112266    6044 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0327 23:28:28.149902    6044 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.3/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4
	I0327 23:28:28.149902    6044 cache.go:56] Caching tarball of preloaded images
	I0327 23:28:28.151051    6044 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0327 23:28:28.156844    6044 out.go:97] Downloading Kubernetes v1.29.3 preload ...
	I0327 23:28:28.156844    6044 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4 ...
	I0327 23:28:28.220941    6044 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.3/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4?checksum=md5:2fedab548578a1509c0f422889c3109c -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4
	I0327 23:28:31.714955    6044 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4 ...
	I0327 23:28:31.715553    6044 preload.go:255] verifying checksum of C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.3-docker-overlay2-amd64.tar.lz4 ...
	
	
	* The control-plane node download-only-277500 host does not exist
	  To start a cluster, run: "minikube start -p download-only-277500"

-- /stdout --
** stderr ** 
	W0327 23:28:34.044887    3820 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.29.3/LogsDuration (0.29s)

TestDownloadOnly/v1.29.3/DeleteAll (1.49s)

=== RUN   TestDownloadOnly/v1.29.3/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-windows-amd64.exe delete --all
aaa_download_only_test.go:197: (dbg) Done: out/minikube-windows-amd64.exe delete --all: (1.4925695s)
--- PASS: TestDownloadOnly/v1.29.3/DeleteAll (1.49s)

TestDownloadOnly/v1.29.3/DeleteAlwaysSucceeds (1.48s)

=== RUN   TestDownloadOnly/v1.29.3/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-windows-amd64.exe delete -p download-only-277500
aaa_download_only_test.go:208: (dbg) Done: out/minikube-windows-amd64.exe delete -p download-only-277500: (1.4806522s)
--- PASS: TestDownloadOnly/v1.29.3/DeleteAlwaysSucceeds (1.48s)

TestDownloadOnly/v1.30.0-beta.0/json-events (11.79s)

=== RUN   TestDownloadOnly/v1.30.0-beta.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-485100 --force --alsologtostderr --kubernetes-version=v1.30.0-beta.0 --container-runtime=docker --driver=hyperv
aaa_download_only_test.go:81: (dbg) Done: out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-485100 --force --alsologtostderr --kubernetes-version=v1.30.0-beta.0 --container-runtime=docker --driver=hyperv: (11.7855117s)
--- PASS: TestDownloadOnly/v1.30.0-beta.0/json-events (11.79s)

TestDownloadOnly/v1.30.0-beta.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.30.0-beta.0/preload-exists
--- PASS: TestDownloadOnly/v1.30.0-beta.0/preload-exists (0.00s)

TestDownloadOnly/v1.30.0-beta.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.30.0-beta.0/kubectl
--- PASS: TestDownloadOnly/v1.30.0-beta.0/kubectl (0.00s)

TestDownloadOnly/v1.30.0-beta.0/LogsDuration (0.31s)

=== RUN   TestDownloadOnly/v1.30.0-beta.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-windows-amd64.exe logs -p download-only-485100
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-windows-amd64.exe logs -p download-only-485100: exit status 85 (305.5497ms)

-- stdout --
	
	==> Audit <==
	|---------|-------------------------------------|----------------------|-------------------|----------------|---------------------|---------------------|
	| Command |                Args                 |       Profile        |       User        |    Version     |     Start Time      |      End Time       |
	|---------|-------------------------------------|----------------------|-------------------|----------------|---------------------|---------------------|
	| start   | -o=json --download-only             | download-only-439300 | minikube6\jenkins | v1.33.0-beta.0 | 27 Mar 24 23:27 UTC |                     |
	|         | -p download-only-439300             |                      |                   |                |                     |                     |
	|         | --force --alsologtostderr           |                      |                   |                |                     |                     |
	|         | --kubernetes-version=v1.20.0        |                      |                   |                |                     |                     |
	|         | --container-runtime=docker          |                      |                   |                |                     |                     |
	|         | --driver=hyperv                     |                      |                   |                |                     |                     |
	| delete  | --all                               | minikube             | minikube6\jenkins | v1.33.0-beta.0 | 27 Mar 24 23:28 UTC | 27 Mar 24 23:28 UTC |
	| delete  | -p download-only-439300             | download-only-439300 | minikube6\jenkins | v1.33.0-beta.0 | 27 Mar 24 23:28 UTC | 27 Mar 24 23:28 UTC |
	| start   | -o=json --download-only             | download-only-277500 | minikube6\jenkins | v1.33.0-beta.0 | 27 Mar 24 23:28 UTC |                     |
	|         | -p download-only-277500             |                      |                   |                |                     |                     |
	|         | --force --alsologtostderr           |                      |                   |                |                     |                     |
	|         | --kubernetes-version=v1.29.3        |                      |                   |                |                     |                     |
	|         | --container-runtime=docker          |                      |                   |                |                     |                     |
	|         | --driver=hyperv                     |                      |                   |                |                     |                     |
	| delete  | --all                               | minikube             | minikube6\jenkins | v1.33.0-beta.0 | 27 Mar 24 23:28 UTC | 27 Mar 24 23:28 UTC |
	| delete  | -p download-only-277500             | download-only-277500 | minikube6\jenkins | v1.33.0-beta.0 | 27 Mar 24 23:28 UTC | 27 Mar 24 23:28 UTC |
	| start   | -o=json --download-only             | download-only-485100 | minikube6\jenkins | v1.33.0-beta.0 | 27 Mar 24 23:28 UTC |                     |
	|         | -p download-only-485100             |                      |                   |                |                     |                     |
	|         | --force --alsologtostderr           |                      |                   |                |                     |                     |
	|         | --kubernetes-version=v1.30.0-beta.0 |                      |                   |                |                     |                     |
	|         | --container-runtime=docker          |                      |                   |                |                     |                     |
	|         | --driver=hyperv                     |                      |                   |                |                     |                     |
	|---------|-------------------------------------|----------------------|-------------------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/27 23:28:37
	Running on machine: minikube6
	Binary: Built with gc go1.22.1 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0327 23:28:37.384285    4304 out.go:291] Setting OutFile to fd 792 ...
	I0327 23:28:37.384971    4304 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 23:28:37.384971    4304 out.go:304] Setting ErrFile to fd 796...
	I0327 23:28:37.384971    4304 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 23:28:37.408987    4304 out.go:298] Setting JSON to true
	I0327 23:28:37.412861    4304 start.go:129] hostinfo: {"hostname":"minikube6","uptime":4778,"bootTime":1711577338,"procs":191,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4170 Build 19045.4170","kernelVersion":"10.0.19045.4170 Build 19045.4170","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0327 23:28:37.413885    4304 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0327 23:28:37.419772    4304 out.go:97] [download-only-485100] minikube v1.33.0-beta.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4170 Build 19045.4170
	I0327 23:28:37.420293    4304 notify.go:220] Checking for updates...
	I0327 23:28:37.422655    4304 out.go:169] KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0327 23:28:37.425219    4304 out.go:169] MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0327 23:28:37.427898    4304 out.go:169] MINIKUBE_LOCATION=18485
	I0327 23:28:37.430891    4304 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	W0327 23:28:37.436697    4304 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0327 23:28:37.437834    4304 driver.go:392] Setting default libvirt URI to qemu:///system
	I0327 23:28:43.408697    4304 out.go:97] Using the hyperv driver based on user configuration
	I0327 23:28:43.409061    4304 start.go:297] selected driver: hyperv
	I0327 23:28:43.409172    4304 start.go:901] validating driver "hyperv" against <nil>
	I0327 23:28:43.409511    4304 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0327 23:28:43.462124    4304 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=65534MB, container=0MB
	I0327 23:28:43.462945    4304 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0327 23:28:43.463485    4304 cni.go:84] Creating CNI manager for ""
	I0327 23:28:43.463637    4304 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0327 23:28:43.463671    4304 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0327 23:28:43.463671    4304 start.go:340] cluster config:
	{Name:download-only-485100 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:6000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0-beta.0 ClusterName:download-only-485100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0327 23:28:43.463671    4304 iso.go:125] acquiring lock: {Name:mk879943e10653d47fd8ae811a43a2f6cff06f02 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0327 23:28:43.467159    4304 out.go:97] Starting "download-only-485100" primary control-plane node in "download-only-485100" cluster
	I0327 23:28:43.467159    4304 preload.go:132] Checking if preload exists for k8s version v1.30.0-beta.0 and runtime docker
	I0327 23:28:43.509759    4304 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.0-beta.0/preloaded-images-k8s-v18-v1.30.0-beta.0-docker-overlay2-amd64.tar.lz4
	I0327 23:28:43.509759    4304 cache.go:56] Caching tarball of preloaded images
	I0327 23:28:43.510030    4304 preload.go:132] Checking if preload exists for k8s version v1.30.0-beta.0 and runtime docker
	I0327 23:28:43.513074    4304 out.go:97] Downloading Kubernetes v1.30.0-beta.0 preload ...
	I0327 23:28:43.513157    4304 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.30.0-beta.0-docker-overlay2-amd64.tar.lz4 ...
	I0327 23:28:43.578098    4304 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.0-beta.0/preloaded-images-k8s-v18-v1.30.0-beta.0-docker-overlay2-amd64.tar.lz4?checksum=md5:d024b8f2a881a92d6d422e5948616edf -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-beta.0-docker-overlay2-amd64.tar.lz4
	I0327 23:28:46.830202    4304 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.30.0-beta.0-docker-overlay2-amd64.tar.lz4 ...
	I0327 23:28:46.830593    4304 preload.go:255] verifying checksum of C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-beta.0-docker-overlay2-amd64.tar.lz4 ...
	
	
	* The control-plane node download-only-485100 host does not exist
	  To start a cluster, run: "minikube start -p download-only-485100"

-- /stdout --
** stderr ** 
	W0327 23:28:49.100473    7240 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.30.0-beta.0/LogsDuration (0.31s)

TestDownloadOnly/v1.30.0-beta.0/DeleteAll (1.33s)

=== RUN   TestDownloadOnly/v1.30.0-beta.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-windows-amd64.exe delete --all
aaa_download_only_test.go:197: (dbg) Done: out/minikube-windows-amd64.exe delete --all: (1.3315123s)
--- PASS: TestDownloadOnly/v1.30.0-beta.0/DeleteAll (1.33s)

TestDownloadOnly/v1.30.0-beta.0/DeleteAlwaysSucceeds (1.25s)

=== RUN   TestDownloadOnly/v1.30.0-beta.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-windows-amd64.exe delete -p download-only-485100
aaa_download_only_test.go:208: (dbg) Done: out/minikube-windows-amd64.exe delete -p download-only-485100: (1.2487406s)
--- PASS: TestDownloadOnly/v1.30.0-beta.0/DeleteAlwaysSucceeds (1.25s)

TestBinaryMirror (7.71s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-windows-amd64.exe start --download-only -p binary-mirror-279000 --alsologtostderr --binary-mirror http://127.0.0.1:58180 --driver=hyperv
aaa_download_only_test.go:314: (dbg) Done: out/minikube-windows-amd64.exe start --download-only -p binary-mirror-279000 --alsologtostderr --binary-mirror http://127.0.0.1:58180 --driver=hyperv: (6.7454234s)
helpers_test.go:175: Cleaning up "binary-mirror-279000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p binary-mirror-279000
--- PASS: TestBinaryMirror (7.71s)

TestOffline (270.32s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-windows-amd64.exe start -p offline-docker-905300 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=hyperv
aab_offline_test.go:55: (dbg) Done: out/minikube-windows-amd64.exe start -p offline-docker-905300 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=hyperv: (3m48.2426644s)
helpers_test.go:175: Cleaning up "offline-docker-905300" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p offline-docker-905300
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p offline-docker-905300: (42.0710693s)
--- PASS: TestOffline (270.32s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.31s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:928: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p addons-120100
addons_test.go:928: (dbg) Non-zero exit: out/minikube-windows-amd64.exe addons enable dashboard -p addons-120100: exit status 85 (304.1654ms)

-- stdout --
	* Profile "addons-120100" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-120100"

-- /stdout --
** stderr ** 
	W0327 23:29:04.081759    1304 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.31s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.31s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-windows-amd64.exe addons disable dashboard -p addons-120100
addons_test.go:939: (dbg) Non-zero exit: out/minikube-windows-amd64.exe addons disable dashboard -p addons-120100: exit status 85 (314.481ms)

-- stdout --
	* Profile "addons-120100" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-120100"

-- /stdout --
** stderr ** 
	W0327 23:29:04.082763    5048 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.31s)

TestCertOptions (533.72s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-windows-amd64.exe start -p cert-options-040600 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=hyperv
cert_options_test.go:49: (dbg) Done: out/minikube-windows-amd64.exe start -p cert-options-040600 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=hyperv: (7m48.3201311s)
cert_options_test.go:60: (dbg) Run:  out/minikube-windows-amd64.exe -p cert-options-040600 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:60: (dbg) Done: out/minikube-windows-amd64.exe -p cert-options-040600 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt": (10.8083203s)
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-040600 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p cert-options-040600 -- "sudo cat /etc/kubernetes/admin.conf"
cert_options_test.go:100: (dbg) Done: out/minikube-windows-amd64.exe ssh -p cert-options-040600 -- "sudo cat /etc/kubernetes/admin.conf": (10.4101164s)
helpers_test.go:175: Cleaning up "cert-options-040600" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p cert-options-040600
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p cert-options-040600: (44.0293059s)
--- PASS: TestCertOptions (533.72s)

TestCertExpiration (1037.46s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-windows-amd64.exe start -p cert-expiration-320900 --memory=2048 --cert-expiration=3m --driver=hyperv
cert_options_test.go:123: (dbg) Done: out/minikube-windows-amd64.exe start -p cert-expiration-320900 --memory=2048 --cert-expiration=3m --driver=hyperv: (7m58.533487s)
E0328 02:13:29.057345   10460 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-848700\client.crt: The system cannot find the path specified.
cert_options_test.go:131: (dbg) Run:  out/minikube-windows-amd64.exe start -p cert-expiration-320900 --memory=2048 --cert-expiration=8760h --driver=hyperv
cert_options_test.go:131: (dbg) Done: out/minikube-windows-amd64.exe start -p cert-expiration-320900 --memory=2048 --cert-expiration=8760h --driver=hyperv: (5m30.2949571s)
helpers_test.go:175: Cleaning up "cert-expiration-320900" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p cert-expiration-320900
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p cert-expiration-320900: (48.6254818s)
--- PASS: TestCertExpiration (1037.46s)

TestDockerFlags (659.2s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-windows-amd64.exe start -p docker-flags-698100 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=hyperv
E0328 01:58:29.050753   10460 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-848700\client.crt: The system cannot find the path specified.
E0328 01:59:52.313832   10460 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-848700\client.crt: The system cannot find the path specified.
docker_test.go:51: (dbg) Done: out/minikube-windows-amd64.exe start -p docker-flags-698100 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=hyperv: (9m53.9947271s)
docker_test.go:56: (dbg) Run:  out/minikube-windows-amd64.exe -p docker-flags-698100 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:56: (dbg) Done: out/minikube-windows-amd64.exe -p docker-flags-698100 ssh "sudo systemctl show docker --property=Environment --no-pager": (10.6228908s)
docker_test.go:67: (dbg) Run:  out/minikube-windows-amd64.exe -p docker-flags-698100 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
docker_test.go:67: (dbg) Done: out/minikube-windows-amd64.exe -p docker-flags-698100 ssh "sudo systemctl show docker --property=ExecStart --no-pager": (10.7665531s)
helpers_test.go:175: Cleaning up "docker-flags-698100" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p docker-flags-698100
E0328 02:08:29.058772   10460 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-848700\client.crt: The system cannot find the path specified.
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p docker-flags-698100: (43.8189497s)
--- PASS: TestDockerFlags (659.20s)

TestForceSystemdFlag (588.05s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-windows-amd64.exe start -p force-systemd-flag-934100 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=hyperv
docker_test.go:91: (dbg) Done: out/minikube-windows-amd64.exe start -p force-systemd-flag-934100 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=hyperv: (8m49.2018018s)
docker_test.go:110: (dbg) Run:  out/minikube-windows-amd64.exe -p force-systemd-flag-934100 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Done: out/minikube-windows-amd64.exe -p force-systemd-flag-934100 ssh "docker info --format {{.CgroupDriver}}": (10.7044952s)
helpers_test.go:175: Cleaning up "force-systemd-flag-934100" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p force-systemd-flag-934100
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p force-systemd-flag-934100: (48.1403327s)
--- PASS: TestForceSystemdFlag (588.05s)

TestForceSystemdEnv (685s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-windows-amd64.exe start -p force-systemd-env-153500 --memory=2048 --alsologtostderr -v=5 --driver=hyperv
E0328 01:53:29.041715   10460 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-848700\client.crt: The system cannot find the path specified.
docker_test.go:155: (dbg) Done: out/minikube-windows-amd64.exe start -p force-systemd-env-153500 --memory=2048 --alsologtostderr -v=5 --driver=hyperv: (10m31.8305395s)
docker_test.go:110: (dbg) Run:  out/minikube-windows-amd64.exe -p force-systemd-env-153500 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Done: out/minikube-windows-amd64.exe -p force-systemd-env-153500 ssh "docker info --format {{.CgroupDriver}}": (10.8990111s)
helpers_test.go:175: Cleaning up "force-systemd-env-153500" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p force-systemd-env-153500
E0328 02:03:29.045521   10460 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-848700\client.crt: The system cannot find the path specified.
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p force-systemd-env-153500: (42.269941s)
--- PASS: TestForceSystemdEnv (685.00s)

TestErrorSpam/start (18.59s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-199000 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-199000 start --dry-run
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-199000 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-199000 start --dry-run: (6.0768005s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-199000 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-199000 start --dry-run
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-199000 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-199000 start --dry-run: (6.2238928s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-199000 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-199000 start --dry-run
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-199000 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-199000 start --dry-run: (6.2837478s)
--- PASS: TestErrorSpam/start (18.59s)

TestErrorSpam/status (40.36s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-199000 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-199000 status
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-199000 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-199000 status: (13.6138068s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-199000 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-199000 status
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-199000 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-199000 status: (13.3187339s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-199000 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-199000 status
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-199000 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-199000 status: (13.4206429s)
--- PASS: TestErrorSpam/status (40.36s)

TestErrorSpam/pause (24.66s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-199000 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-199000 pause
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-199000 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-199000 pause: (8.3773583s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-199000 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-199000 pause
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-199000 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-199000 pause: (8.1567381s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-199000 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-199000 pause
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-199000 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-199000 pause: (8.1234222s)
--- PASS: TestErrorSpam/pause (24.66s)

TestErrorSpam/unpause (24.83s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-199000 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-199000 unpause
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-199000 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-199000 unpause: (8.3627989s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-199000 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-199000 unpause
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-199000 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-199000 unpause: (8.2601941s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-199000 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-199000 unpause
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-199000 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-199000 unpause: (8.1992505s)
--- PASS: TestErrorSpam/unpause (24.83s)

TestErrorSpam/stop (65.65s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-199000 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-199000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-199000 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-199000 stop: (42.1152496s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-199000 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-199000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-199000 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-199000 stop: (11.8019861s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-199000 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-199000 stop
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-199000 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-199000 stop: (11.7262716s)
--- PASS: TestErrorSpam/stop (65.65s)

TestFunctional/serial/CopySyncFile (0.04s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\test\nested\copy\10460\hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.04s)

TestFunctional/serial/StartWithProxy (256.43s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-848700 --memory=4000 --apiserver-port=8441 --wait=all --driver=hyperv
functional_test.go:2230: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-848700 --memory=4000 --apiserver-port=8441 --wait=all --driver=hyperv: (4m16.4080339s)
--- PASS: TestFunctional/serial/StartWithProxy (256.43s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (135.02s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-848700 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-848700 --alsologtostderr -v=8: (2m15.0192145s)
functional_test.go:659: soft start took 2m15.0199717s for "functional-848700" cluster.
--- PASS: TestFunctional/serial/SoftStart (135.02s)

TestFunctional/serial/KubeContext (0.14s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.14s)

TestFunctional/serial/KubectlGetPods (0.26s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-848700 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.26s)

TestFunctional/serial/CacheCmd/cache/add_remote (28.56s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-848700 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-windows-amd64.exe -p functional-848700 cache add registry.k8s.io/pause:3.1: (9.6206074s)
functional_test.go:1045: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-848700 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-windows-amd64.exe -p functional-848700 cache add registry.k8s.io/pause:3.3: (9.2366819s)
functional_test.go:1045: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-848700 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-windows-amd64.exe -p functional-848700 cache add registry.k8s.io/pause:latest: (9.6999373s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (28.56s)

TestFunctional/serial/CacheCmd/cache/add_local (11.15s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-848700 C:\Users\jenkins.minikube6\AppData\Local\Temp\TestFunctionalserialCacheCmdcacheadd_local443778838\001
functional_test.go:1073: (dbg) Done: docker build -t minikube-local-cache-test:functional-848700 C:\Users\jenkins.minikube6\AppData\Local\Temp\TestFunctionalserialCacheCmdcacheadd_local443778838\001: (2.0265887s)
functional_test.go:1085: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-848700 cache add minikube-local-cache-test:functional-848700
functional_test.go:1085: (dbg) Done: out/minikube-windows-amd64.exe -p functional-848700 cache add minikube-local-cache-test:functional-848700: (8.5864656s)
functional_test.go:1090: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-848700 cache delete minikube-local-cache-test:functional-848700
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-848700
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (11.15s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.27s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-windows-amd64.exe cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.27s)

TestFunctional/serial/CacheCmd/cache/list (0.28s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-windows-amd64.exe cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.28s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (10.28s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-848700 ssh sudo crictl images
functional_test.go:1120: (dbg) Done: out/minikube-windows-amd64.exe -p functional-848700 ssh sudo crictl images: (10.2770213s)
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (10.28s)

TestFunctional/serial/CacheCmd/cache/cache_reload (39.12s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-848700 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1143: (dbg) Done: out/minikube-windows-amd64.exe -p functional-848700 ssh sudo docker rmi registry.k8s.io/pause:latest: (10.0919491s)
functional_test.go:1149: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-848700 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-848700 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (10.2305991s)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	W0327 23:48:19.221507   13184 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-848700 cache reload
functional_test.go:1154: (dbg) Done: out/minikube-windows-amd64.exe -p functional-848700 cache reload: (8.7038224s)
functional_test.go:1159: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-848700 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1159: (dbg) Done: out/minikube-windows-amd64.exe -p functional-848700 ssh sudo crictl inspecti registry.k8s.io/pause:latest: (10.090504s)
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (39.12s)

TestFunctional/serial/CacheCmd/cache/delete (0.57s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-windows-amd64.exe cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-windows-amd64.exe cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.57s)

TestFunctional/serial/MinikubeKubectlCmd (0.51s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-848700 kubectl -- --context functional-848700 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.51s)

TestFunctional/serial/ExtraConfig (131.4s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-848700 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:753: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-848700 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (2m11.3980259s)
functional_test.go:757: restart took 2m11.3986777s for "functional-848700" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (131.40s)

TestFunctional/serial/ComponentHealth (0.2s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-848700 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.20s)

TestFunctional/serial/LogsCmd (9.21s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-848700 logs
functional_test.go:1232: (dbg) Done: out/minikube-windows-amd64.exe -p functional-848700 logs: (9.2064423s)
--- PASS: TestFunctional/serial/LogsCmd (9.21s)

TestFunctional/serial/LogsFileCmd (11.61s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-848700 logs --file C:\Users\jenkins.minikube6\AppData\Local\Temp\TestFunctionalserialLogsFileCmd1257008325\001\logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-windows-amd64.exe -p functional-848700 logs --file C:\Users\jenkins.minikube6\AppData\Local\Temp\TestFunctionalserialLogsFileCmd1257008325\001\logs.txt: (11.6019405s)
--- PASS: TestFunctional/serial/LogsFileCmd (11.61s)

TestFunctional/serial/InvalidService (22.76s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-848700 apply -f testdata\invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-windows-amd64.exe service invalid-svc -p functional-848700
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-windows-amd64.exe service invalid-svc -p functional-848700: exit status 115 (18.1560284s)

-- stdout --
	|-----------|-------------|-------------|-----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |             URL             |
	|-----------|-------------|-------------|-----------------------------|
	| default   | invalid-svc |          80 | http://172.28.236.250:31490 |
	|-----------|-------------|-------------|-----------------------------|
	
	

-- /stdout --
** stderr ** 
	W0327 23:52:01.729356    8708 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - C:\Users\jenkins.minikube6\AppData\Local\Temp\minikube_service_8fb87d8e79e761d215f3221b4a4d8a6300edfb06_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-848700 delete -f testdata\invalidsvc.yaml
functional_test.go:2323: (dbg) Done: kubectl --context functional-848700 delete -f testdata\invalidsvc.yaml: (1.1466004s)
--- PASS: TestFunctional/serial/InvalidService (22.76s)

TestFunctional/parallel/StatusCmd (43.72s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-848700 status
functional_test.go:850: (dbg) Done: out/minikube-windows-amd64.exe -p functional-848700 status: (14.708905s)
functional_test.go:856: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-848700 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:856: (dbg) Done: out/minikube-windows-amd64.exe -p functional-848700 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}: (14.8946297s)
functional_test.go:868: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-848700 status -o json
functional_test.go:868: (dbg) Done: out/minikube-windows-amd64.exe -p functional-848700 status -o json: (14.1193053s)
--- PASS: TestFunctional/parallel/StatusCmd (43.72s)

TestFunctional/parallel/ServiceCmdConnect (29.13s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1625: (dbg) Run:  kubectl --context functional-848700 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1631: (dbg) Run:  kubectl --context functional-848700 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-55497b8b78-9xxl4" [74593e5e-ab24-431e-b47d-d5ff3e351a72] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-55497b8b78-9xxl4" [74593e5e-ab24-431e-b47d-d5ff3e351a72] Running
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 9.0111194s
functional_test.go:1645: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-848700 service hello-node-connect --url
functional_test.go:1645: (dbg) Done: out/minikube-windows-amd64.exe -p functional-848700 service hello-node-connect --url: (19.4039951s)
functional_test.go:1651: found endpoint for hello-node-connect: http://172.28.236.250:31406
functional_test.go:1671: http://172.28.236.250:31406: success! body:

Hostname: hello-node-connect-55497b8b78-9xxl4

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://172.28.236.250:8080/

Request Headers:
	accept-encoding=gzip
	host=172.28.236.250:31406
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (29.13s)

TestFunctional/parallel/AddonsCmd (0.89s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-848700 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-848700 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.89s)

TestFunctional/parallel/PersistentVolumeClaim (42.7s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [b22b2f6c-1e15-4539-9cec-25649ec63e34] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.013912s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-848700 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-848700 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-848700 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-848700 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [9de29062-9e24-4f73-952f-2a41a87b64ad] Pending
helpers_test.go:344: "sp-pod" [9de29062-9e24-4f73-952f-2a41a87b64ad] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [9de29062-9e24-4f73-952f-2a41a87b64ad] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 24.0162175s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-848700 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-848700 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-848700 delete -f testdata/storage-provisioner/pod.yaml: (2.1984415s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-848700 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [d444b577-fed4-44f2-8fb6-36fb61893f1d] Pending
helpers_test.go:344: "sp-pod" [d444b577-fed4-44f2-8fb6-36fb61893f1d] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [d444b577-fed4-44f2-8fb6-36fb61893f1d] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.0180416s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-848700 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (42.70s)

TestFunctional/parallel/SSHCmd (24.55s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-848700 ssh "echo hello"
functional_test.go:1721: (dbg) Done: out/minikube-windows-amd64.exe -p functional-848700 ssh "echo hello": (11.7588386s)
functional_test.go:1738: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-848700 ssh "cat /etc/hostname"
functional_test.go:1738: (dbg) Done: out/minikube-windows-amd64.exe -p functional-848700 ssh "cat /etc/hostname": (12.7932904s)
--- PASS: TestFunctional/parallel/SSHCmd (24.55s)

TestFunctional/parallel/CpCmd (67.57s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-848700 cp testdata\cp-test.txt /home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p functional-848700 cp testdata\cp-test.txt /home/docker/cp-test.txt: (9.8329102s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-848700 ssh -n functional-848700 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p functional-848700 ssh -n functional-848700 "sudo cat /home/docker/cp-test.txt": (12.1486758s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-848700 cp functional-848700:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestFunctionalparallelCpCmd812063501\001\cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p functional-848700 cp functional-848700:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestFunctionalparallelCpCmd812063501\001\cp-test.txt: (11.331131s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-848700 ssh -n functional-848700 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p functional-848700 ssh -n functional-848700 "sudo cat /home/docker/cp-test.txt": (12.0224158s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-848700 cp testdata\cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p functional-848700 cp testdata\cp-test.txt /tmp/does/not/exist/cp-test.txt: (9.5085296s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-848700 ssh -n functional-848700 "sudo cat /tmp/does/not/exist/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p functional-848700 ssh -n functional-848700 "sudo cat /tmp/does/not/exist/cp-test.txt": (12.7098252s)
--- PASS: TestFunctional/parallel/CpCmd (67.57s)

TestFunctional/parallel/MySQL (67.47s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-848700 replace --force -f testdata\mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-859648c796-q5cv4" [44dddbc0-9476-452f-8289-50ad2662a8e4] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-859648c796-q5cv4" [44dddbc0-9476-452f-8289-50ad2662a8e4] Running
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 52.0153767s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-848700 exec mysql-859648c796-q5cv4 -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-848700 exec mysql-859648c796-q5cv4 -- mysql -ppassword -e "show databases;": exit status 1 (336.0173ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-848700 exec mysql-859648c796-q5cv4 -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-848700 exec mysql-859648c796-q5cv4 -- mysql -ppassword -e "show databases;": exit status 1 (342.0894ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-848700 exec mysql-859648c796-q5cv4 -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-848700 exec mysql-859648c796-q5cv4 -- mysql -ppassword -e "show databases;": exit status 1 (373.4893ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-848700 exec mysql-859648c796-q5cv4 -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-848700 exec mysql-859648c796-q5cv4 -- mysql -ppassword -e "show databases;": exit status 1 (369.4206ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-848700 exec mysql-859648c796-q5cv4 -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-848700 exec mysql-859648c796-q5cv4 -- mysql -ppassword -e "show databases;": exit status 1 (329.2738ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-848700 exec mysql-859648c796-q5cv4 -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (67.47s)

TestFunctional/parallel/FileSync (11.69s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/10460/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-848700 ssh "sudo cat /etc/test/nested/copy/10460/hosts"
functional_test.go:1927: (dbg) Done: out/minikube-windows-amd64.exe -p functional-848700 ssh "sudo cat /etc/test/nested/copy/10460/hosts": (11.6870682s)
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (11.69s)

TestFunctional/parallel/CertSync (72.93s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/10460.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-848700 ssh "sudo cat /etc/ssl/certs/10460.pem"
functional_test.go:1969: (dbg) Done: out/minikube-windows-amd64.exe -p functional-848700 ssh "sudo cat /etc/ssl/certs/10460.pem": (12.7798893s)
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/10460.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-848700 ssh "sudo cat /usr/share/ca-certificates/10460.pem"
functional_test.go:1969: (dbg) Done: out/minikube-windows-amd64.exe -p functional-848700 ssh "sudo cat /usr/share/ca-certificates/10460.pem": (11.5407119s)
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-848700 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1969: (dbg) Done: out/minikube-windows-amd64.exe -p functional-848700 ssh "sudo cat /etc/ssl/certs/51391683.0": (11.3806155s)
functional_test.go:1995: Checking for existence of /etc/ssl/certs/104602.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-848700 ssh "sudo cat /etc/ssl/certs/104602.pem"
functional_test.go:1996: (dbg) Done: out/minikube-windows-amd64.exe -p functional-848700 ssh "sudo cat /etc/ssl/certs/104602.pem": (12.2082219s)
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/104602.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-848700 ssh "sudo cat /usr/share/ca-certificates/104602.pem"
functional_test.go:1996: (dbg) Done: out/minikube-windows-amd64.exe -p functional-848700 ssh "sudo cat /usr/share/ca-certificates/104602.pem": (12.0187764s)
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-848700 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
functional_test.go:1996: (dbg) Done: out/minikube-windows-amd64.exe -p functional-848700 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0": (12.9931375s)
--- PASS: TestFunctional/parallel/CertSync (72.93s)

TestFunctional/parallel/NodeLabels (0.22s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-848700 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.22s)

TestFunctional/parallel/NonActiveRuntimeDisabled (12.76s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-848700 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-848700 ssh "sudo systemctl is-active crio": exit status 1 (12.7562441s)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	W0327 23:52:22.894055    9944 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (12.76s)

TestFunctional/parallel/License (3.67s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-windows-amd64.exe license
functional_test.go:2284: (dbg) Done: out/minikube-windows-amd64.exe license: (3.6466062s)
--- PASS: TestFunctional/parallel/License (3.67s)

TestFunctional/parallel/Version/short (0.29s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-848700 version --short
--- PASS: TestFunctional/parallel/Version/short (0.29s)

TestFunctional/parallel/Version/components (10.02s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-848700 version -o=json --components
functional_test.go:2266: (dbg) Done: out/minikube-windows-amd64.exe -p functional-848700 version -o=json --components: (10.0183826s)
--- PASS: TestFunctional/parallel/Version/components (10.02s)

TestFunctional/parallel/ImageCommands/ImageListShort (8.08s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-848700 image ls --format short --alsologtostderr
functional_test.go:260: (dbg) Done: out/minikube-windows-amd64.exe -p functional-848700 image ls --format short --alsologtostderr: (8.0752451s)
functional_test.go:265: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-848700 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.29.3
registry.k8s.io/kube-proxy:v1.29.3
registry.k8s.io/kube-controller-manager:v1.29.3
registry.k8s.io/kube-apiserver:v1.29.3
registry.k8s.io/etcd:3.5.12-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/google-containers/addon-resizer:functional-848700
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-848700
functional_test.go:268: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-848700 image ls --format short --alsologtostderr:
W0327 23:55:34.174616    5392 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
I0327 23:55:34.262025    5392 out.go:291] Setting OutFile to fd 972 ...
I0327 23:55:34.262958    5392 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0327 23:55:34.262958    5392 out.go:304] Setting ErrFile to fd 888...
I0327 23:55:34.262958    5392 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0327 23:55:34.285891    5392 config.go:182] Loaded profile config "functional-848700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
I0327 23:55:34.286418    5392 config.go:182] Loaded profile config "functional-848700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
I0327 23:55:34.287319    5392 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-848700 ).state
I0327 23:55:36.800557    5392 main.go:141] libmachine: [stdout =====>] : Running

I0327 23:55:36.800557    5392 main.go:141] libmachine: [stderr =====>] : 
I0327 23:55:36.814361    5392 ssh_runner.go:195] Run: systemctl --version
I0327 23:55:36.814361    5392 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-848700 ).state
I0327 23:55:39.135158    5392 main.go:141] libmachine: [stdout =====>] : Running

I0327 23:55:39.135158    5392 main.go:141] libmachine: [stderr =====>] : 
I0327 23:55:39.135267    5392 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-848700 ).networkadapters[0]).ipaddresses[0]
I0327 23:55:41.927231    5392 main.go:141] libmachine: [stdout =====>] : 172.28.236.250
I0327 23:55:41.927503    5392 main.go:141] libmachine: [stderr =====>] : 
I0327 23:55:41.927580    5392 sshutil.go:53] new ssh client: &{IP:172.28.236.250 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-848700\id_rsa Username:docker}
I0327 23:55:42.035487    5392 ssh_runner.go:235] Completed: systemctl --version: (5.2210314s)
I0327 23:55:42.046448    5392 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (8.08s)
TestFunctional/parallel/ImageCommands/ImageListTable (8.13s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-848700 image ls --format table --alsologtostderr
functional_test.go:260: (dbg) Done: out/minikube-windows-amd64.exe -p functional-848700 image ls --format table --alsologtostderr: (8.1292143s)
functional_test.go:265: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-848700 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| registry.k8s.io/pause                       | 3.9               | e6f1816883972 | 744kB  |
| registry.k8s.io/pause                       | 3.3               | 0184c1613d929 | 683kB  |
| registry.k8s.io/pause                       | 3.1               | da86e6ba6ca19 | 742kB  |
| docker.io/library/nginx                     | alpine            | e289a478ace02 | 42.6MB |
| registry.k8s.io/etcd                        | 3.5.12-0          | 3861cfcd7c04c | 149MB  |
| registry.k8s.io/kube-apiserver              | v1.29.3           | 39f995c9f1996 | 127MB  |
| registry.k8s.io/kube-proxy                  | v1.29.3           | a1d263b5dc5b0 | 82.4MB |
| docker.io/library/nginx                     | latest            | 92b11f67642b6 | 187MB  |
| gcr.io/k8s-minikube/busybox                 | latest            | beae173ccac6a | 1.24MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | 6e38f40d628db | 31.5MB |
| registry.k8s.io/echoserver                  | 1.8               | 82e4c8a736a4f | 95.4MB |
| docker.io/localhost/my-image                | functional-848700 | 54bdf6b609b40 | 1.24MB |
| docker.io/library/minikube-local-cache-test | functional-848700 | f1384490cf8c1 | 30B    |
| registry.k8s.io/pause                       | latest            | 350b164e7ae1d | 240kB  |
| docker.io/library/mysql                     | 5.7               | 5107333e08a87 | 501MB  |
| registry.k8s.io/coredns/coredns             | v1.11.1           | cbb01a7bd410d | 59.8MB |
| registry.k8s.io/kube-controller-manager     | v1.29.3           | 6052a25da3f97 | 122MB  |
| registry.k8s.io/kube-scheduler              | v1.29.3           | 8c390d98f50c0 | 59.6MB |
| gcr.io/google-containers/addon-resizer      | functional-848700 | ffd4cfbbe753e | 32.9MB |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-848700 image ls --format table --alsologtostderr:
W0327 23:55:57.830615    9356 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
I0327 23:55:57.917805    9356 out.go:291] Setting OutFile to fd 972 ...
I0327 23:55:57.932054    9356 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0327 23:55:57.932603    9356 out.go:304] Setting ErrFile to fd 844...
I0327 23:55:57.932603    9356 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0327 23:55:57.952615    9356 config.go:182] Loaded profile config "functional-848700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
I0327 23:55:57.952615    9356 config.go:182] Loaded profile config "functional-848700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
I0327 23:55:57.953753    9356 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-848700 ).state
I0327 23:56:00.373907    9356 main.go:141] libmachine: [stdout =====>] : Running
I0327 23:56:00.374054    9356 main.go:141] libmachine: [stderr =====>] : 
I0327 23:56:00.399677    9356 ssh_runner.go:195] Run: systemctl --version
I0327 23:56:00.399677    9356 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-848700 ).state
I0327 23:56:02.813908    9356 main.go:141] libmachine: [stdout =====>] : Running
I0327 23:56:02.814921    9356 main.go:141] libmachine: [stderr =====>] : 
I0327 23:56:02.814921    9356 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-848700 ).networkadapters[0]).ipaddresses[0]
I0327 23:56:05.636293    9356 main.go:141] libmachine: [stdout =====>] : 172.28.236.250
I0327 23:56:05.636293    9356 main.go:141] libmachine: [stderr =====>] : 
I0327 23:56:05.636895    9356 sshutil.go:53] new ssh client: &{IP:172.28.236.250 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-848700\id_rsa Username:docker}
I0327 23:56:05.740066    9356 ssh_runner.go:235] Completed: systemctl --version: (5.3403574s)
I0327 23:56:05.749111    9356 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (8.13s)
TestFunctional/parallel/ImageCommands/ImageListJson (7.88s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-848700 image ls --format json --alsologtostderr
functional_test.go:260: (dbg) Done: out/minikube-windows-amd64.exe -p functional-848700 image ls --format json --alsologtostderr: (7.8754069s)
functional_test.go:265: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-848700 image ls --format json --alsologtostderr:
[{"id":"8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.29.3"],"size":"59600000"},{"id":"cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"59800000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.29.3"],"size":"122000000"},{"id":"39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.29.3"],"size":"127000000"},{"id":"92b11f67642b62bbb98e7e49169c346b30e20cd3c1c034d31087e46924b9312e","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"187000000"},{"id":"e289a478ace02cd72f0a71a5b2ec0594495e1fae85faa10aae3b0da530812608","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"42600000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"742000"},{"id":"f1384490cf8c1bf53cb9e43a53fbc42b8e1aaf7b3cf59ba4f9b1dc8223f00daf","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-848700"],"size":"30"},{"id":"a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.29.3"],"size":"82400000"},{"id":"3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.12-0"],"size":"149000000"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":[],"repoTags":["docker.io/library/mysql:5.7"],"size":"501000000"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.9"],"size":"744000"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-848700"],"size":"32900000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"683000"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"95400000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"}]
functional_test.go:268: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-848700 image ls --format json --alsologtostderr:
W0327 23:55:50.313214   12440 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
I0327 23:55:50.397983   12440 out.go:291] Setting OutFile to fd 952 ...
I0327 23:55:50.398983   12440 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0327 23:55:50.398983   12440 out.go:304] Setting ErrFile to fd 972...
I0327 23:55:50.398983   12440 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0327 23:55:50.414985   12440 config.go:182] Loaded profile config "functional-848700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
I0327 23:55:50.414985   12440 config.go:182] Loaded profile config "functional-848700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
I0327 23:55:50.415989   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-848700 ).state
I0327 23:55:52.770898   12440 main.go:141] libmachine: [stdout =====>] : Running
I0327 23:55:52.770898   12440 main.go:141] libmachine: [stderr =====>] : 
I0327 23:55:52.782879   12440 ssh_runner.go:195] Run: systemctl --version
I0327 23:55:52.783888   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-848700 ).state
I0327 23:55:55.085892   12440 main.go:141] libmachine: [stdout =====>] : Running
I0327 23:55:55.086080   12440 main.go:141] libmachine: [stderr =====>] : 
I0327 23:55:55.086080   12440 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-848700 ).networkadapters[0]).ipaddresses[0]
I0327 23:55:57.867368   12440 main.go:141] libmachine: [stdout =====>] : 172.28.236.250
I0327 23:55:57.867368   12440 main.go:141] libmachine: [stderr =====>] : 
I0327 23:55:57.868016   12440 sshutil.go:53] new ssh client: &{IP:172.28.236.250 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-848700\id_rsa Username:docker}
I0327 23:55:57.977946   12440 ssh_runner.go:235] Completed: systemctl --version: (5.1939897s)
I0327 23:55:57.990243   12440 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (7.88s)
TestFunctional/parallel/ImageCommands/ImageListYaml (8.07s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-848700 image ls --format yaml --alsologtostderr
functional_test.go:260: (dbg) Done: out/minikube-windows-amd64.exe -p functional-848700 image ls --format yaml --alsologtostderr: (8.0711787s)
functional_test.go:265: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-848700 image ls --format yaml --alsologtostderr:
- id: e289a478ace02cd72f0a71a5b2ec0594495e1fae85faa10aae3b0da530812608
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "42600000"
- id: 39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.29.3
size: "127000000"
- id: 6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.29.3
size: "122000000"
- id: cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "59800000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- registry.k8s.io/echoserver:1.8
size: "95400000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "683000"
- id: f1384490cf8c1bf53cb9e43a53fbc42b8e1aaf7b3cf59ba4f9b1dc8223f00daf
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-848700
size: "30"
- id: 8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.29.3
size: "59600000"
- id: 92b11f67642b62bbb98e7e49169c346b30e20cd3c1c034d31087e46924b9312e
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "187000000"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests: []
repoTags:
- docker.io/library/mysql:5.7
size: "501000000"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-848700
size: "32900000"
- id: a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.29.3
size: "82400000"
- id: 3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.12-0
size: "149000000"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.9
size: "744000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "742000"
functional_test.go:268: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-848700 image ls --format yaml --alsologtostderr:
W0327 23:55:42.242208    3980 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
I0327 23:55:42.336626    3980 out.go:291] Setting OutFile to fd 952 ...
I0327 23:55:42.338228    3980 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0327 23:55:42.338228    3980 out.go:304] Setting ErrFile to fd 536...
I0327 23:55:42.338228    3980 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0327 23:55:42.360569    3980 config.go:182] Loaded profile config "functional-848700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
I0327 23:55:42.361102    3980 config.go:182] Loaded profile config "functional-848700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
I0327 23:55:42.361849    3980 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-848700 ).state
I0327 23:55:44.716057    3980 main.go:141] libmachine: [stdout =====>] : Running
I0327 23:55:44.716815    3980 main.go:141] libmachine: [stderr =====>] : 
I0327 23:55:44.731525    3980 ssh_runner.go:195] Run: systemctl --version
I0327 23:55:44.731525    3980 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-848700 ).state
I0327 23:55:47.148071    3980 main.go:141] libmachine: [stdout =====>] : Running
I0327 23:55:47.148071    3980 main.go:141] libmachine: [stderr =====>] : 
I0327 23:55:47.148275    3980 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-848700 ).networkadapters[0]).ipaddresses[0]
I0327 23:55:49.932510    3980 main.go:141] libmachine: [stdout =====>] : 172.28.236.250
I0327 23:55:49.932510    3980 main.go:141] libmachine: [stderr =====>] : 
I0327 23:55:49.933895    3980 sshutil.go:53] new ssh client: &{IP:172.28.236.250 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-848700\id_rsa Username:docker}
I0327 23:55:50.078116    3980 ssh_runner.go:235] Completed: systemctl --version: (5.3465596s)
I0327 23:55:50.089729    3980 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (8.07s)
TestFunctional/parallel/ImageCommands/ImageBuild (28.48s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-848700 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-848700 ssh pgrep buildkitd: exit status 1 (10.1518337s)
** stderr ** 
	W0327 23:55:43.032386    9960 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	ssh: Process exited with status 1
** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-848700 image build -t localhost/my-image:functional-848700 testdata\build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-windows-amd64.exe -p functional-848700 image build -t localhost/my-image:functional-848700 testdata\build --alsologtostderr: (10.5950675s)
functional_test.go:319: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-848700 image build -t localhost/my-image:functional-848700 testdata\build --alsologtostderr:
Sending build context to Docker daemon  3.072kB

Step 1/3 : FROM gcr.io/k8s-minikube/busybox
latest: Pulling from k8s-minikube/busybox
5cc84ad355aa: Pulling fs layer
5cc84ad355aa: Verifying Checksum
5cc84ad355aa: Download complete
5cc84ad355aa: Pull complete
Digest: sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:latest
---> beae173ccac6
Step 2/3 : RUN true
---> Running in 3084c0c794d9
---> Removed intermediate container 3084c0c794d9
---> 281a80dd1504
Step 3/3 : ADD content.txt /
---> 54bdf6b609b4
Successfully built 54bdf6b609b4
Successfully tagged localhost/my-image:functional-848700
functional_test.go:322: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-848700 image build -t localhost/my-image:functional-848700 testdata\build --alsologtostderr:
W0327 23:55:53.190562     972 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
I0327 23:55:53.279164     972 out.go:291] Setting OutFile to fd 968 ...
I0327 23:55:53.295182     972 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0327 23:55:53.295182     972 out.go:304] Setting ErrFile to fd 876...
I0327 23:55:53.295182     972 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0327 23:55:53.312157     972 config.go:182] Loaded profile config "functional-848700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
I0327 23:55:53.329831     972 config.go:182] Loaded profile config "functional-848700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
I0327 23:55:53.331169     972 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-848700 ).state
I0327 23:55:55.626261     972 main.go:141] libmachine: [stdout =====>] : Running
I0327 23:55:55.626620     972 main.go:141] libmachine: [stderr =====>] : 
I0327 23:55:55.644424     972 ssh_runner.go:195] Run: systemctl --version
I0327 23:55:55.644424     972 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-848700 ).state
I0327 23:55:57.926900     972 main.go:141] libmachine: [stdout =====>] : Running
I0327 23:55:57.926900     972 main.go:141] libmachine: [stderr =====>] : 
I0327 23:55:57.926900     972 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-848700 ).networkadapters[0]).ipaddresses[0]
I0327 23:56:00.875209     972 main.go:141] libmachine: [stdout =====>] : 172.28.236.250
I0327 23:56:00.875209     972 main.go:141] libmachine: [stderr =====>] : 
I0327 23:56:00.875998     972 sshutil.go:53] new ssh client: &{IP:172.28.236.250 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-848700\id_rsa Username:docker}
I0327 23:56:01.004106     972 ssh_runner.go:235] Completed: systemctl --version: (5.3596508s)
I0327 23:56:01.004106     972 build_images.go:161] Building image from path: C:\Users\jenkins.minikube6\AppData\Local\Temp\build.3806411289.tar
I0327 23:56:01.019458     972 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0327 23:56:01.064979     972 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3806411289.tar
I0327 23:56:01.073979     972 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3806411289.tar: stat -c "%s %y" /var/lib/minikube/build/build.3806411289.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.3806411289.tar': No such file or directory
I0327 23:56:01.073979     972 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\AppData\Local\Temp\build.3806411289.tar --> /var/lib/minikube/build/build.3806411289.tar (3072 bytes)
I0327 23:56:01.148622     972 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3806411289
I0327 23:56:01.189693     972 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3806411289 -xf /var/lib/minikube/build/build.3806411289.tar
I0327 23:56:01.212583     972 docker.go:360] Building image: /var/lib/minikube/build/build.3806411289
I0327 23:56:01.228251     972 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-848700 /var/lib/minikube/build/build.3806411289
DEPRECATED: The legacy builder is deprecated and will be removed in a future release.
Install the buildx component to build images with BuildKit:
https://docs.docker.com/go/buildx/
I0327 23:56:03.530887     972 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-848700 /var/lib/minikube/build/build.3806411289: (2.302622s)
I0327 23:56:03.543785     972 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3806411289
I0327 23:56:03.601086     972 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3806411289.tar
I0327 23:56:03.629213     972 build_images.go:217] Built localhost/my-image:functional-848700 from C:\Users\jenkins.minikube6\AppData\Local\Temp\build.3806411289.tar
I0327 23:56:03.629364     972 build_images.go:133] succeeded building to: functional-848700
I0327 23:56:03.629415     972 build_images.go:134] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-848700 image ls
functional_test.go:447: (dbg) Done: out/minikube-windows-amd64.exe -p functional-848700 image ls: (7.7335757s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (28.48s)
TestFunctional/parallel/ImageCommands/Setup (4.72s)
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (4.4487646s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-848700
--- PASS: TestFunctional/parallel/ImageCommands/Setup (4.72s)
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (26.73s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-848700 image load --daemon gcr.io/google-containers/addon-resizer:functional-848700 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-windows-amd64.exe -p functional-848700 image load --daemon gcr.io/google-containers/addon-resizer:functional-848700 --alsologtostderr: (17.8726303s)
functional_test.go:447: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-848700 image ls
functional_test.go:447: (dbg) Done: out/minikube-windows-amd64.exe -p functional-848700 image ls: (8.8552297s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (26.73s)
TestFunctional/parallel/DockerEnv/powershell (51.62s)
=== RUN   TestFunctional/parallel/DockerEnv/powershell
functional_test.go:495: (dbg) Run:  powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-848700 docker-env | Invoke-Expression ; out/minikube-windows-amd64.exe status -p functional-848700"
functional_test.go:495: (dbg) Done: powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-848700 docker-env | Invoke-Expression ; out/minikube-windows-amd64.exe status -p functional-848700": (33.2995063s)
functional_test.go:518: (dbg) Run:  powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-848700 docker-env | Invoke-Expression ; docker images"
functional_test.go:518: (dbg) Done: powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-848700 docker-env | Invoke-Expression ; docker images": (18.3117935s)
--- PASS: TestFunctional/parallel/DockerEnv/powershell (51.62s)
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (23.79s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-848700 image load --daemon gcr.io/google-containers/addon-resizer:functional-848700 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-windows-amd64.exe -p functional-848700 image load --daemon gcr.io/google-containers/addon-resizer:functional-848700 --alsologtostderr: (14.419027s)
functional_test.go:447: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-848700 image ls
functional_test.go:447: (dbg) Done: out/minikube-windows-amd64.exe -p functional-848700 image ls: (9.3744494s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (23.79s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (31.83s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (4.1405249s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-848700
functional_test.go:244: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-848700 image load --daemon gcr.io/google-containers/addon-resizer:functional-848700 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-windows-amd64.exe -p functional-848700 image load --daemon gcr.io/google-containers/addon-resizer:functional-848700 --alsologtostderr: (17.5973605s)
functional_test.go:447: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-848700 image ls
functional_test.go:447: (dbg) Done: out/minikube-windows-amd64.exe -p functional-848700 image ls: (9.8037702s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (31.83s)

TestFunctional/parallel/ServiceCmd/DeployApp (18.52s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1435: (dbg) Run:  kubectl --context functional-848700 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1441: (dbg) Run:  kubectl --context functional-848700 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-d7447cc7f-ctlsp" [8f273842-c693-4b86-a858-e6f93c6aa34b] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-d7447cc7f-ctlsp" [8f273842-c693-4b86-a858-e6f93c6aa34b] Running
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 18.0197989s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (18.52s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (9.95s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-windows-amd64.exe -p functional-848700 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-windows-amd64.exe -p functional-848700 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-windows-amd64.exe -p functional-848700 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-windows-amd64.exe -p functional-848700 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 3404: OpenProcess: The parameter is incorrect.
helpers_test.go:508: unable to kill pid 4936: TerminateProcess: Access is denied.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (9.95s)

TestFunctional/parallel/ServiceCmd/List (15.44s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-848700 service list
functional_test.go:1455: (dbg) Done: out/minikube-windows-amd64.exe -p functional-848700 service list: (15.4389975s)
--- PASS: TestFunctional/parallel/ServiceCmd/List (15.44s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-windows-amd64.exe -p functional-848700 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (15.85s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-848700 apply -f testdata\testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [555e4d79-2bad-4c9a-a6fe-254b90376c3d] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [555e4d79-2bad-4c9a-a6fe-254b90376c3d] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 15.0177445s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (15.85s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (10.78s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-848700 image save gcr.io/google-containers/addon-resizer:functional-848700 C:\jenkins\workspace\Hyper-V_Windows_integration\addon-resizer-save.tar --alsologtostderr
functional_test.go:379: (dbg) Done: out/minikube-windows-amd64.exe -p functional-848700 image save gcr.io/google-containers/addon-resizer:functional-848700 C:\jenkins\workspace\Hyper-V_Windows_integration\addon-resizer-save.tar --alsologtostderr: (10.7816728s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (10.78s)

TestFunctional/parallel/ServiceCmd/JSONOutput (14.25s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-848700 service list -o json
functional_test.go:1485: (dbg) Done: out/minikube-windows-amd64.exe -p functional-848700 service list -o json: (14.2520046s)
functional_test.go:1490: Took "14.2521835s" to run "out/minikube-windows-amd64.exe -p functional-848700 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (14.25s)

TestFunctional/parallel/ImageCommands/ImageRemove (16.76s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-848700 image rm gcr.io/google-containers/addon-resizer:functional-848700 --alsologtostderr
functional_test.go:391: (dbg) Done: out/minikube-windows-amd64.exe -p functional-848700 image rm gcr.io/google-containers/addon-resizer:functional-848700 --alsologtostderr: (8.4015822s)
functional_test.go:447: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-848700 image ls
functional_test.go:447: (dbg) Done: out/minikube-windows-amd64.exe -p functional-848700 image ls: (8.3557417s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (16.76s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-windows-amd64.exe -p functional-848700 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 13040: TerminateProcess: Access is denied.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (19.44s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-848700 image load C:\jenkins\workspace\Hyper-V_Windows_integration\addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-windows-amd64.exe -p functional-848700 image load C:\jenkins\workspace\Hyper-V_Windows_integration\addon-resizer-save.tar --alsologtostderr: (10.9239352s)
functional_test.go:447: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-848700 image ls
functional_test.go:447: (dbg) Done: out/minikube-windows-amd64.exe -p functional-848700 image ls: (8.5166844s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (19.44s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (11.57s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-848700
functional_test.go:423: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-848700 image save --daemon gcr.io/google-containers/addon-resizer:functional-848700 --alsologtostderr
functional_test.go:423: (dbg) Done: out/minikube-windows-amd64.exe -p functional-848700 image save --daemon gcr.io/google-containers/addon-resizer:functional-848700 --alsologtostderr: (11.1167169s)
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-848700
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (11.57s)

TestFunctional/parallel/ProfileCmd/profile_not_create (12.5s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-windows-amd64.exe profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
functional_test.go:1271: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (11.925587s)
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (12.50s)

TestFunctional/parallel/ProfileCmd/profile_list (12.07s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-windows-amd64.exe profile list
functional_test.go:1306: (dbg) Done: out/minikube-windows-amd64.exe profile list: (11.7877643s)
functional_test.go:1311: Took "11.7877643s" to run "out/minikube-windows-amd64.exe profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-windows-amd64.exe profile list -l
functional_test.go:1325: Took "283.7151ms" to run "out/minikube-windows-amd64.exe profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (12.07s)

TestFunctional/parallel/ProfileCmd/profile_json_output (11.44s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-windows-amd64.exe profile list -o json
functional_test.go:1357: (dbg) Done: out/minikube-windows-amd64.exe profile list -o json: (11.158448s)
functional_test.go:1362: Took "11.1589842s" to run "out/minikube-windows-amd64.exe profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-windows-amd64.exe profile list -o json --light
functional_test.go:1375: Took "275.5759ms" to run "out/minikube-windows-amd64.exe profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (11.44s)

TestFunctional/parallel/UpdateContextCmd/no_changes (2.73s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-848700 update-context --alsologtostderr -v=2
functional_test.go:2115: (dbg) Done: out/minikube-windows-amd64.exe -p functional-848700 update-context --alsologtostderr -v=2: (2.7286052s)
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (2.73s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (2.61s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-848700 update-context --alsologtostderr -v=2
functional_test.go:2115: (dbg) Done: out/minikube-windows-amd64.exe -p functional-848700 update-context --alsologtostderr -v=2: (2.6110302s)
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (2.61s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (2.75s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-848700 update-context --alsologtostderr -v=2
functional_test.go:2115: (dbg) Done: out/minikube-windows-amd64.exe -p functional-848700 update-context --alsologtostderr -v=2: (2.7432642s)
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (2.75s)

TestFunctional/delete_addon-resizer_images (0.51s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-848700
--- PASS: TestFunctional/delete_addon-resizer_images (0.51s)

TestFunctional/delete_my-image_image (0.19s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-848700
--- PASS: TestFunctional/delete_my-image_image (0.19s)

TestFunctional/delete_minikube_cached_images (0.2s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-848700
--- PASS: TestFunctional/delete_minikube_cached_images (0.20s)

TestMultiControlPlane/serial/StartCluster (871.71s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-windows-amd64.exe start -p ha-170000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=hyperv
E0328 00:03:29.004949   10460 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-848700\client.crt: The system cannot find the path specified.
E0328 00:03:29.019740   10460 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-848700\client.crt: The system cannot find the path specified.
E0328 00:03:29.035115   10460 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-848700\client.crt: The system cannot find the path specified.
E0328 00:03:29.067267   10460 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-848700\client.crt: The system cannot find the path specified.
E0328 00:03:29.112796   10460 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-848700\client.crt: The system cannot find the path specified.
E0328 00:03:29.206704   10460 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-848700\client.crt: The system cannot find the path specified.
E0328 00:03:29.381295   10460 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-848700\client.crt: The system cannot find the path specified.
E0328 00:03:29.715851   10460 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-848700\client.crt: The system cannot find the path specified.
E0328 00:03:30.365424   10460 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-848700\client.crt: The system cannot find the path specified.
E0328 00:03:31.652178   10460 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-848700\client.crt: The system cannot find the path specified.
E0328 00:03:34.220103   10460 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-848700\client.crt: The system cannot find the path specified.
E0328 00:03:39.343599   10460 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-848700\client.crt: The system cannot find the path specified.
E0328 00:03:49.598108   10460 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-848700\client.crt: The system cannot find the path specified.
E0328 00:04:10.086676   10460 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-848700\client.crt: The system cannot find the path specified.
E0328 00:04:51.049987   10460 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-848700\client.crt: The system cannot find the path specified.
E0328 00:06:12.981842   10460 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-848700\client.crt: The system cannot find the path specified.
E0328 00:08:29.007449   10460 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-848700\client.crt: The system cannot find the path specified.
E0328 00:08:56.836873   10460 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-848700\client.crt: The system cannot find the path specified.
E0328 00:13:29.007562   10460 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-848700\client.crt: The system cannot find the path specified.
ha_test.go:101: (dbg) Done: out/minikube-windows-amd64.exe start -p ha-170000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=hyperv: (13m52.6414247s)
ha_test.go:107: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-170000 status -v=7 --alsologtostderr
ha_test.go:107: (dbg) Done: out/minikube-windows-amd64.exe -p ha-170000 status -v=7 --alsologtostderr: (39.0703594s)
--- PASS: TestMultiControlPlane/serial/StartCluster (871.71s)

TestMultiControlPlane/serial/DeployApp (12.62s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-170000 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-170000 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p ha-170000 -- rollout status deployment/busybox: (4.0837536s)
ha_test.go:140: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-170000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-170000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-170000 -- exec busybox-7fdf7869d9-jw6s4 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p ha-170000 -- exec busybox-7fdf7869d9-jw6s4 -- nslookup kubernetes.io: (1.9137534s)
ha_test.go:171: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-170000 -- exec busybox-7fdf7869d9-lb47v -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-170000 -- exec busybox-7fdf7869d9-shnp5 -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-170000 -- exec busybox-7fdf7869d9-jw6s4 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-170000 -- exec busybox-7fdf7869d9-lb47v -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-170000 -- exec busybox-7fdf7869d9-shnp5 -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-170000 -- exec busybox-7fdf7869d9-jw6s4 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-170000 -- exec busybox-7fdf7869d9-lb47v -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-170000 -- exec busybox-7fdf7869d9-shnp5 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (12.62s)

TestMultiControlPlane/serial/AddWorkerNode (268.3s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-windows-amd64.exe node add -p ha-170000 -v=7 --alsologtostderr
E0328 00:18:29.005196   10460 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-848700\client.crt: The system cannot find the path specified.
E0328 00:19:52.217786   10460 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-848700\client.crt: The system cannot find the path specified.
ha_test.go:228: (dbg) Done: out/minikube-windows-amd64.exe node add -p ha-170000 -v=7 --alsologtostderr: (3m36.6484345s)
ha_test.go:234: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-170000 status -v=7 --alsologtostderr
ha_test.go:234: (dbg) Done: out/minikube-windows-amd64.exe -p ha-170000 status -v=7 --alsologtostderr: (51.6463912s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (268.30s)

TestMultiControlPlane/serial/NodeLabels (0.21s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-170000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.21s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (30.52s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (30.5192859s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (30.52s)

TestImageBuild/serial/Setup (209.75s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-windows-amd64.exe start -p image-951500 --driver=hyperv
E0328 00:36:32.238588   10460 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-848700\client.crt: The system cannot find the path specified.
image_test.go:69: (dbg) Done: out/minikube-windows-amd64.exe start -p image-951500 --driver=hyperv: (3m29.7536674s)
--- PASS: TestImageBuild/serial/Setup (209.75s)

TestImageBuild/serial/NormalBuild (10.19s)

=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal -p image-951500
image_test.go:78: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal -p image-951500: (10.1858182s)
--- PASS: TestImageBuild/serial/NormalBuild (10.19s)

TestImageBuild/serial/BuildWithBuildArg (9.59s)

=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-951500
E0328 00:38:29.014792   10460 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-848700\client.crt: The system cannot find the path specified.
image_test.go:99: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-951500: (9.5935772s)
--- PASS: TestImageBuild/serial/BuildWithBuildArg (9.59s)

TestImageBuild/serial/BuildWithDockerIgnore (8.23s)

=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-951500
image_test.go:133: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-951500: (8.2337558s)
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (8.23s)

TestImageBuild/serial/BuildWithSpecifiedDockerfile (8.12s)

=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-951500
image_test.go:88: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-951500: (8.1146113s)
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (8.12s)

TestJSONOutput/start/Command (252.84s)
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe start -p json-output-873600 --output=json --user=testUser --memory=2200 --wait=true --driver=hyperv
E0328 00:43:29.025895   10460 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-848700\client.crt: The system cannot find the path specified.
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe start -p json-output-873600 --output=json --user=testUser --memory=2200 --wait=true --driver=hyperv: (4m12.8440387s)
--- PASS: TestJSONOutput/start/Command (252.84s)

TestJSONOutput/start/Audit (0s)
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (8.44s)
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe pause -p json-output-873600 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe pause -p json-output-873600 --output=json --user=testUser: (8.4351504s)
--- PASS: TestJSONOutput/pause/Command (8.44s)

TestJSONOutput/pause/Audit (0s)
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (8.16s)
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p json-output-873600 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe unpause -p json-output-873600 --output=json --user=testUser: (8.1544013s)
--- PASS: TestJSONOutput/unpause/Command (8.16s)

TestJSONOutput/unpause/Audit (0s)
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (36.73s)
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe stop -p json-output-873600 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe stop -p json-output-873600 --output=json --user=testUser: (36.7274435s)
--- PASS: TestJSONOutput/stop/Command (36.73s)

TestJSONOutput/stop/Audit (0s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (1.61s)
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-windows-amd64.exe start -p json-output-error-093800 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p json-output-error-093800 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (308.905ms)
-- stdout --
	{"specversion":"1.0","id":"7ac573eb-39a2-4628-8e2d-c31ac5ddf641","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-093800] minikube v1.33.0-beta.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4170 Build 19045.4170","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"da71baa2-50bb-417f-a711-c2b62530f4c5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=C:\\Users\\jenkins.minikube6\\minikube-integration\\kubeconfig"}}
	{"specversion":"1.0","id":"1bce63e6-2d0c-46ab-93f4-b376e49823d4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"0b412a02-4d66-4a36-92ec-50bcecfac611","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube"}}
	{"specversion":"1.0","id":"a8ea4ce2-4faf-45b2-bab8-b8bef8257dc5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18485"}}
	{"specversion":"1.0","id":"2bc49ee1-a8d3-4fd7-82a1-1ab81a963e08","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"237717fd-ede5-4194-aade-bd7b380db99f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on windows/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
** stderr ** 
	W0328 00:44:58.364271    8060 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
** /stderr **
helpers_test.go:175: Cleaning up "json-output-error-093800" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p json-output-error-093800
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p json-output-error-093800: (1.3027871s)
--- PASS: TestErrorJSONOutput (1.61s)

TestMainNoArgs (0.3s)
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-windows-amd64.exe
--- PASS: TestMainNoArgs (0.30s)

TestMinikubeProfile (563.32s)
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-windows-amd64.exe start -p first-240600 --driver=hyperv
E0328 00:48:29.025479   10460 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-848700\client.crt: The system cannot find the path specified.
minikube_profile_test.go:44: (dbg) Done: out/minikube-windows-amd64.exe start -p first-240600 --driver=hyperv: (3m29.6139531s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-windows-amd64.exe start -p second-240600 --driver=hyperv
minikube_profile_test.go:44: (dbg) Done: out/minikube-windows-amd64.exe start -p second-240600 --driver=hyperv: (3m31.1749512s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-windows-amd64.exe profile first-240600
minikube_profile_test.go:55: (dbg) Run:  out/minikube-windows-amd64.exe profile list -ojson
minikube_profile_test.go:55: (dbg) Done: out/minikube-windows-amd64.exe profile list -ojson: (22.736763s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-windows-amd64.exe profile second-240600
minikube_profile_test.go:55: (dbg) Run:  out/minikube-windows-amd64.exe profile list -ojson
minikube_profile_test.go:55: (dbg) Done: out/minikube-windows-amd64.exe profile list -ojson: (22.5242613s)
helpers_test.go:175: Cleaning up "second-240600" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p second-240600
E0328 00:53:12.259107   10460 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-848700\client.crt: The system cannot find the path specified.
E0328 00:53:29.027151   10460 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-848700\client.crt: The system cannot find the path specified.
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p second-240600: (48.3731656s)
helpers_test.go:175: Cleaning up "first-240600" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p first-240600
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p first-240600: (47.9349686s)
--- PASS: TestMinikubeProfile (563.32s)

TestMountStart/serial/StartWithMountFirst (165.47s)
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-windows-amd64.exe start -p mount-start-1-133400 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=hyperv
mount_start_test.go:98: (dbg) Done: out/minikube-windows-amd64.exe start -p mount-start-1-133400 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=hyperv: (2m44.4581061s)
--- PASS: TestMountStart/serial/StartWithMountFirst (165.47s)

TestMountStart/serial/VerifyMountFirst (10.1s)
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-1-133400 ssh -- ls /minikube-host
mount_start_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-1-133400 ssh -- ls /minikube-host: (10.1019024s)
--- PASS: TestMountStart/serial/VerifyMountFirst (10.10s)

TestMountStart/serial/StartWithMountSecond (166.58s)
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-windows-amd64.exe start -p mount-start-2-133400 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=hyperv
E0328 00:58:29.026123   10460 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-848700\client.crt: The system cannot find the path specified.
mount_start_test.go:98: (dbg) Done: out/minikube-windows-amd64.exe start -p mount-start-2-133400 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=hyperv: (2m45.562818s)
--- PASS: TestMountStart/serial/StartWithMountSecond (166.58s)

TestMountStart/serial/VerifyMountSecond (9.98s)
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-2-133400 ssh -- ls /minikube-host
mount_start_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-2-133400 ssh -- ls /minikube-host: (9.9841264s)
--- PASS: TestMountStart/serial/VerifyMountSecond (9.98s)

TestMountStart/serial/DeleteFirst (29.58s)
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-windows-amd64.exe delete -p mount-start-1-133400 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-windows-amd64.exe delete -p mount-start-1-133400 --alsologtostderr -v=5: (29.5776323s)
--- PASS: TestMountStart/serial/DeleteFirst (29.58s)

TestMountStart/serial/VerifyMountPostDelete (10.12s)
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-2-133400 ssh -- ls /minikube-host
mount_start_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-2-133400 ssh -- ls /minikube-host: (10.1145294s)
--- PASS: TestMountStart/serial/VerifyMountPostDelete (10.12s)

TestMountStart/serial/Stop (28.01s)
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-windows-amd64.exe stop -p mount-start-2-133400
mount_start_test.go:155: (dbg) Done: out/minikube-windows-amd64.exe stop -p mount-start-2-133400: (28.011819s)
--- PASS: TestMountStart/serial/Stop (28.01s)

TestMountStart/serial/RestartStopped (124.84s)
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-windows-amd64.exe start -p mount-start-2-133400
mount_start_test.go:166: (dbg) Done: out/minikube-windows-amd64.exe start -p mount-start-2-133400: (2m3.8338568s)
--- PASS: TestMountStart/serial/RestartStopped (124.84s)

TestMountStart/serial/VerifyMountPostStop (10.06s)
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-2-133400 ssh -- ls /minikube-host
E0328 01:03:29.020779   10460 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-848700\client.crt: The system cannot find the path specified.
mount_start_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-2-133400 ssh -- ls /minikube-host: (10.0572593s)
--- PASS: TestMountStart/serial/VerifyMountPostStop (10.06s)

TestMultiNode/serial/FreshStart2Nodes (453.54s)
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-240000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=hyperv
E0328 01:08:29.024983   10460 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-848700\client.crt: The system cannot find the path specified.
E0328 01:09:52.271083   10460 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-848700\client.crt: The system cannot find the path specified.
multinode_test.go:96: (dbg) Done: out/minikube-windows-amd64.exe start -p multinode-240000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=hyperv: (7m7.4390438s)
multinode_test.go:102: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-240000 status --alsologtostderr
multinode_test.go:102: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-240000 status --alsologtostderr: (26.1026504s)
--- PASS: TestMultiNode/serial/FreshStart2Nodes (453.54s)

TestMultiNode/serial/DeployApp2Nodes (9.5s)
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-240000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-240000 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-240000 -- rollout status deployment/busybox: (3.1558443s)
multinode_test.go:505: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-240000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-240000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-240000 -- exec busybox-7fdf7869d9-ct428 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-240000 -- exec busybox-7fdf7869d9-ct428 -- nslookup kubernetes.io: (1.9165519s)
multinode_test.go:536: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-240000 -- exec busybox-7fdf7869d9-zgwm4 -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-240000 -- exec busybox-7fdf7869d9-ct428 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-240000 -- exec busybox-7fdf7869d9-zgwm4 -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-240000 -- exec busybox-7fdf7869d9-ct428 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-240000 -- exec busybox-7fdf7869d9-zgwm4 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (9.50s)

TestMultiNode/serial/AddNode (246.11s)
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-windows-amd64.exe node add -p multinode-240000 -v 3 --alsologtostderr
E0328 01:13:29.034863   10460 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-848700\client.crt: The system cannot find the path specified.
multinode_test.go:121: (dbg) Done: out/minikube-windows-amd64.exe node add -p multinode-240000 -v 3 --alsologtostderr: (3m27.6548613s)
multinode_test.go:127: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-240000 status --alsologtostderr
multinode_test.go:127: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-240000 status --alsologtostderr: (38.4507985s)
--- PASS: TestMultiNode/serial/AddNode (246.11s)

TestMultiNode/serial/MultiNodeLabels (0.2s)
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-240000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.20s)

TestMultiNode/serial/ProfileList (12.78s)
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
multinode_test.go:143: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (12.7790056s)
--- PASS: TestMultiNode/serial/ProfileList (12.78s)

TestMultiNode/serial/CopyFile (385.92s)
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-240000 status --output json --alsologtostderr
multinode_test.go:184: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-240000 status --output json --alsologtostderr: (38.5911656s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-240000 cp testdata\cp-test.txt multinode-240000:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-240000 cp testdata\cp-test.txt multinode-240000:/home/docker/cp-test.txt: (10.077832s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-240000 ssh -n multinode-240000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-240000 ssh -n multinode-240000 "sudo cat /home/docker/cp-test.txt": (10.0160975s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-240000 cp multinode-240000:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiNodeserialCopyFile2131314308\001\cp-test_multinode-240000.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-240000 cp multinode-240000:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiNodeserialCopyFile2131314308\001\cp-test_multinode-240000.txt: (10.1286828s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-240000 ssh -n multinode-240000 "sudo cat /home/docker/cp-test.txt"
E0328 01:18:29.027280   10460 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-848700\client.crt: The system cannot find the path specified.
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-240000 ssh -n multinode-240000 "sudo cat /home/docker/cp-test.txt": (10.0595348s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-240000 cp multinode-240000:/home/docker/cp-test.txt multinode-240000-m02:/home/docker/cp-test_multinode-240000_multinode-240000-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-240000 cp multinode-240000:/home/docker/cp-test.txt multinode-240000-m02:/home/docker/cp-test_multinode-240000_multinode-240000-m02.txt: (17.5154775s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-240000 ssh -n multinode-240000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-240000 ssh -n multinode-240000 "sudo cat /home/docker/cp-test.txt": (10.1451603s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-240000 ssh -n multinode-240000-m02 "sudo cat /home/docker/cp-test_multinode-240000_multinode-240000-m02.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-240000 ssh -n multinode-240000-m02 "sudo cat /home/docker/cp-test_multinode-240000_multinode-240000-m02.txt": (9.9678434s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-240000 cp multinode-240000:/home/docker/cp-test.txt multinode-240000-m03:/home/docker/cp-test_multinode-240000_multinode-240000-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-240000 cp multinode-240000:/home/docker/cp-test.txt multinode-240000-m03:/home/docker/cp-test_multinode-240000_multinode-240000-m03.txt: (17.5794843s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-240000 ssh -n multinode-240000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-240000 ssh -n multinode-240000 "sudo cat /home/docker/cp-test.txt": (10.0666003s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-240000 ssh -n multinode-240000-m03 "sudo cat /home/docker/cp-test_multinode-240000_multinode-240000-m03.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-240000 ssh -n multinode-240000-m03 "sudo cat /home/docker/cp-test_multinode-240000_multinode-240000-m03.txt": (10.1344158s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-240000 cp testdata\cp-test.txt multinode-240000-m02:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-240000 cp testdata\cp-test.txt multinode-240000-m02:/home/docker/cp-test.txt: (10.076965s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-240000 ssh -n multinode-240000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-240000 ssh -n multinode-240000-m02 "sudo cat /home/docker/cp-test.txt": (10.0471852s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-240000 cp multinode-240000-m02:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiNodeserialCopyFile2131314308\001\cp-test_multinode-240000-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-240000 cp multinode-240000-m02:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiNodeserialCopyFile2131314308\001\cp-test_multinode-240000-m02.txt: (9.922385s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-240000 ssh -n multinode-240000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-240000 ssh -n multinode-240000-m02 "sudo cat /home/docker/cp-test.txt": (10.1067083s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-240000 cp multinode-240000-m02:/home/docker/cp-test.txt multinode-240000:/home/docker/cp-test_multinode-240000-m02_multinode-240000.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-240000 cp multinode-240000-m02:/home/docker/cp-test.txt multinode-240000:/home/docker/cp-test_multinode-240000-m02_multinode-240000.txt: (17.545019s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-240000 ssh -n multinode-240000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-240000 ssh -n multinode-240000-m02 "sudo cat /home/docker/cp-test.txt": (10.1746858s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-240000 ssh -n multinode-240000 "sudo cat /home/docker/cp-test_multinode-240000-m02_multinode-240000.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-240000 ssh -n multinode-240000 "sudo cat /home/docker/cp-test_multinode-240000-m02_multinode-240000.txt": (10.0663878s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-240000 cp multinode-240000-m02:/home/docker/cp-test.txt multinode-240000-m03:/home/docker/cp-test_multinode-240000-m02_multinode-240000-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-240000 cp multinode-240000-m02:/home/docker/cp-test.txt multinode-240000-m03:/home/docker/cp-test_multinode-240000-m02_multinode-240000-m03.txt: (17.7622135s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-240000 ssh -n multinode-240000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-240000 ssh -n multinode-240000-m02 "sudo cat /home/docker/cp-test.txt": (10.2872046s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-240000 ssh -n multinode-240000-m03 "sudo cat /home/docker/cp-test_multinode-240000-m02_multinode-240000-m03.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-240000 ssh -n multinode-240000-m03 "sudo cat /home/docker/cp-test_multinode-240000-m02_multinode-240000-m03.txt": (10.1043625s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-240000 cp testdata\cp-test.txt multinode-240000-m03:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-240000 cp testdata\cp-test.txt multinode-240000-m03:/home/docker/cp-test.txt: (10.1838929s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-240000 ssh -n multinode-240000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-240000 ssh -n multinode-240000-m03 "sudo cat /home/docker/cp-test.txt": (10.0964392s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-240000 cp multinode-240000-m03:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiNodeserialCopyFile2131314308\001\cp-test_multinode-240000-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-240000 cp multinode-240000-m03:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiNodeserialCopyFile2131314308\001\cp-test_multinode-240000-m03.txt: (9.9790772s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-240000 ssh -n multinode-240000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-240000 ssh -n multinode-240000-m03 "sudo cat /home/docker/cp-test.txt": (10.1525816s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-240000 cp multinode-240000-m03:/home/docker/cp-test.txt multinode-240000:/home/docker/cp-test_multinode-240000-m03_multinode-240000.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-240000 cp multinode-240000-m03:/home/docker/cp-test.txt multinode-240000:/home/docker/cp-test_multinode-240000-m03_multinode-240000.txt: (17.4001802s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-240000 ssh -n multinode-240000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-240000 ssh -n multinode-240000-m03 "sudo cat /home/docker/cp-test.txt": (10.080862s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-240000 ssh -n multinode-240000 "sudo cat /home/docker/cp-test_multinode-240000-m03_multinode-240000.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-240000 ssh -n multinode-240000 "sudo cat /home/docker/cp-test_multinode-240000-m03_multinode-240000.txt": (10.0960065s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-240000 cp multinode-240000-m03:/home/docker/cp-test.txt multinode-240000-m02:/home/docker/cp-test_multinode-240000-m03_multinode-240000-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-240000 cp multinode-240000-m03:/home/docker/cp-test.txt multinode-240000-m02:/home/docker/cp-test_multinode-240000-m03_multinode-240000-m02.txt: (17.4487402s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-240000 ssh -n multinode-240000-m03 "sudo cat /home/docker/cp-test.txt"
E0328 01:23:29.027441   10460 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-848700\client.crt: The system cannot find the path specified.
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-240000 ssh -n multinode-240000-m03 "sudo cat /home/docker/cp-test.txt": (10.1240961s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-240000 ssh -n multinode-240000-m02 "sudo cat /home/docker/cp-test_multinode-240000-m03_multinode-240000-m02.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-240000 ssh -n multinode-240000-m02 "sudo cat /home/docker/cp-test_multinode-240000-m03_multinode-240000-m02.txt": (9.9604159s)
--- PASS: TestMultiNode/serial/CopyFile (385.92s)

TestMultiNode/serial/StopNode (82.05s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-240000 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-240000 node stop m03: (26.3639052s)
multinode_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-240000 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-240000 status: exit status 7 (27.8217457s)

-- stdout --
	multinode-240000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-240000-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-240000-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	W0328 01:24:06.685489    1588 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
multinode_test.go:261: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-240000 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-240000 status --alsologtostderr: exit status 7 (27.8564143s)

-- stdout --
	multinode-240000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-240000-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-240000-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	W0328 01:24:34.516928    5912 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0328 01:24:34.599700    5912 out.go:291] Setting OutFile to fd 976 ...
	I0328 01:24:34.600706    5912 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 01:24:34.600706    5912 out.go:304] Setting ErrFile to fd 636...
	I0328 01:24:34.600706    5912 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0328 01:24:34.615195    5912 out.go:298] Setting JSON to false
	I0328 01:24:34.615195    5912 mustload.go:65] Loading cluster: multinode-240000
	I0328 01:24:34.615819    5912 notify.go:220] Checking for updates...
	I0328 01:24:34.616036    5912 config.go:182] Loaded profile config "multinode-240000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0328 01:24:34.616036    5912 status.go:255] checking status of multinode-240000 ...
	I0328 01:24:34.617523    5912 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-240000 ).state
	I0328 01:24:36.959159    5912 main.go:141] libmachine: [stdout =====>] : Running
	
	I0328 01:24:36.959159    5912 main.go:141] libmachine: [stderr =====>] : 
	I0328 01:24:36.959159    5912 status.go:330] multinode-240000 host status = "Running" (err=<nil>)
	I0328 01:24:36.959159    5912 host.go:66] Checking if "multinode-240000" exists ...
	I0328 01:24:36.959929    5912 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-240000 ).state
	I0328 01:24:39.256082    5912 main.go:141] libmachine: [stdout =====>] : Running
	
	I0328 01:24:39.256082    5912 main.go:141] libmachine: [stderr =====>] : 
	I0328 01:24:39.256856    5912 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-240000 ).networkadapters[0]).ipaddresses[0]
	I0328 01:24:41.995610    5912 main.go:141] libmachine: [stdout =====>] : 172.28.227.122
	
	I0328 01:24:41.995610    5912 main.go:141] libmachine: [stderr =====>] : 
	I0328 01:24:41.995740    5912 host.go:66] Checking if "multinode-240000" exists ...
	I0328 01:24:42.012600    5912 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0328 01:24:42.012600    5912 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-240000 ).state
	I0328 01:24:44.298427    5912 main.go:141] libmachine: [stdout =====>] : Running
	
	I0328 01:24:44.299307    5912 main.go:141] libmachine: [stderr =====>] : 
	I0328 01:24:44.299470    5912 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-240000 ).networkadapters[0]).ipaddresses[0]
	I0328 01:24:47.052096    5912 main.go:141] libmachine: [stdout =====>] : 172.28.227.122
	
	I0328 01:24:47.052952    5912 main.go:141] libmachine: [stderr =====>] : 
	I0328 01:24:47.053000    5912 sshutil.go:53] new ssh client: &{IP:172.28.227.122 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-240000\id_rsa Username:docker}
	I0328 01:24:47.146452    5912 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (5.1338178s)
	I0328 01:24:47.159888    5912 ssh_runner.go:195] Run: systemctl --version
	I0328 01:24:47.184680    5912 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0328 01:24:47.219463    5912 kubeconfig.go:125] found "multinode-240000" server: "https://172.28.227.122:8443"
	I0328 01:24:47.219548    5912 api_server.go:166] Checking apiserver status ...
	I0328 01:24:47.232032    5912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0328 01:24:47.275040    5912 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2234/cgroup
	W0328 01:24:47.296015    5912 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2234/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0328 01:24:47.312565    5912 ssh_runner.go:195] Run: ls
	I0328 01:24:47.320811    5912 api_server.go:253] Checking apiserver healthz at https://172.28.227.122:8443/healthz ...
	I0328 01:24:47.330421    5912 api_server.go:279] https://172.28.227.122:8443/healthz returned 200:
	ok
	I0328 01:24:47.330421    5912 status.go:422] multinode-240000 apiserver status = Running (err=<nil>)
	I0328 01:24:47.330421    5912 status.go:257] multinode-240000 status: &{Name:multinode-240000 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0328 01:24:47.330421    5912 status.go:255] checking status of multinode-240000-m02 ...
	I0328 01:24:47.331440    5912 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-240000-m02 ).state
	I0328 01:24:49.633881    5912 main.go:141] libmachine: [stdout =====>] : Running
	
	I0328 01:24:49.634143    5912 main.go:141] libmachine: [stderr =====>] : 
	I0328 01:24:49.634398    5912 status.go:330] multinode-240000-m02 host status = "Running" (err=<nil>)
	I0328 01:24:49.634398    5912 host.go:66] Checking if "multinode-240000-m02" exists ...
	I0328 01:24:49.635440    5912 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-240000-m02 ).state
	I0328 01:24:51.949300    5912 main.go:141] libmachine: [stdout =====>] : Running
	
	I0328 01:24:51.950188    5912 main.go:141] libmachine: [stderr =====>] : 
	I0328 01:24:51.950188    5912 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-240000-m02 ).networkadapters[0]).ipaddresses[0]
	I0328 01:24:54.736887    5912 main.go:141] libmachine: [stdout =====>] : 172.28.230.250
	
	I0328 01:24:54.736887    5912 main.go:141] libmachine: [stderr =====>] : 
	I0328 01:24:54.737846    5912 host.go:66] Checking if "multinode-240000-m02" exists ...
	I0328 01:24:54.750488    5912 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0328 01:24:54.750488    5912 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-240000-m02 ).state
	I0328 01:24:57.033195    5912 main.go:141] libmachine: [stdout =====>] : Running
	
	I0328 01:24:57.033255    5912 main.go:141] libmachine: [stderr =====>] : 
	I0328 01:24:57.033255    5912 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-240000-m02 ).networkadapters[0]).ipaddresses[0]
	I0328 01:24:59.787323    5912 main.go:141] libmachine: [stdout =====>] : 172.28.230.250
	
	I0328 01:24:59.787323    5912 main.go:141] libmachine: [stderr =====>] : 
	I0328 01:24:59.787766    5912 sshutil.go:53] new ssh client: &{IP:172.28.230.250 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-240000-m02\id_rsa Username:docker}
	I0328 01:24:59.883198    5912 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (5.132675s)
	I0328 01:24:59.897168    5912 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0328 01:24:59.924413    5912 status.go:257] multinode-240000-m02 status: &{Name:multinode-240000-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0328 01:24:59.924413    5912 status.go:255] checking status of multinode-240000-m03 ...
	I0328 01:24:59.925578    5912 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-240000-m03 ).state
	I0328 01:25:02.212604    5912 main.go:141] libmachine: [stdout =====>] : Off
	
	I0328 01:25:02.212604    5912 main.go:141] libmachine: [stderr =====>] : 
	I0328 01:25:02.212604    5912 status.go:330] multinode-240000-m03 host status = "Stopped" (err=<nil>)
	I0328 01:25:02.212604    5912 status.go:343] host is not running, skipping remaining checks
	I0328 01:25:02.212604    5912 status.go:257] multinode-240000-m03 status: &{Name:multinode-240000-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (82.05s)

TestMultiNode/serial/StartAfterStop (194.13s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-240000 node start m03 -v=7 --alsologtostderr
E0328 01:26:32.279264   10460 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-848700\client.crt: The system cannot find the path specified.
multinode_test.go:282: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-240000 node start m03 -v=7 --alsologtostderr: (2m35.7700149s)
multinode_test.go:290: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-240000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-240000 status -v=7 --alsologtostderr: (38.1561418s)
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (194.13s)

TestPreload (523.6s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-windows-amd64.exe start -p test-preload-998300 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperv --kubernetes-version=v1.24.4
E0328 01:38:29.045669   10460 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-848700\client.crt: The system cannot find the path specified.
preload_test.go:44: (dbg) Done: out/minikube-windows-amd64.exe start -p test-preload-998300 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperv --kubernetes-version=v1.24.4: (4m12.6951566s)
preload_test.go:52: (dbg) Run:  out/minikube-windows-amd64.exe -p test-preload-998300 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-windows-amd64.exe -p test-preload-998300 image pull gcr.io/k8s-minikube/busybox: (9.0877613s)
preload_test.go:58: (dbg) Run:  out/minikube-windows-amd64.exe stop -p test-preload-998300
preload_test.go:58: (dbg) Done: out/minikube-windows-amd64.exe stop -p test-preload-998300: (41.7587868s)
preload_test.go:66: (dbg) Run:  out/minikube-windows-amd64.exe start -p test-preload-998300 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=hyperv
E0328 01:43:12.298712   10460 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-848700\client.crt: The system cannot find the path specified.
E0328 01:43:29.035248   10460 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-848700\client.crt: The system cannot find the path specified.
preload_test.go:66: (dbg) Done: out/minikube-windows-amd64.exe start -p test-preload-998300 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=hyperv: (2m47.9462021s)
preload_test.go:71: (dbg) Run:  out/minikube-windows-amd64.exe -p test-preload-998300 image list
preload_test.go:71: (dbg) Done: out/minikube-windows-amd64.exe -p test-preload-998300 image list: (7.7277482s)
helpers_test.go:175: Cleaning up "test-preload-998300" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p test-preload-998300
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p test-preload-998300: (44.3818265s)
--- PASS: TestPreload (523.60s)

TestScheduledStopWindows (347.8s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-windows-amd64.exe start -p scheduled-stop-583300 --memory=2048 --driver=hyperv
E0328 01:48:29.049246   10460 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-848700\client.crt: The system cannot find the path specified.
scheduled_stop_test.go:128: (dbg) Done: out/minikube-windows-amd64.exe start -p scheduled-stop-583300 --memory=2048 --driver=hyperv: (3m31.0288881s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-windows-amd64.exe stop -p scheduled-stop-583300 --schedule 5m
scheduled_stop_test.go:137: (dbg) Done: out/minikube-windows-amd64.exe stop -p scheduled-stop-583300 --schedule 5m: (11.3706385s)
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.TimeToStop}} -p scheduled-stop-583300 -n scheduled-stop-583300
scheduled_stop_test.go:191: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.TimeToStop}} -p scheduled-stop-583300 -n scheduled-stop-583300: exit status 1 (10.0175593s)

** stderr ** 
	W0328 01:49:45.153601    2632 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
scheduled_stop_test.go:191: status error: exit status 1 (may be ok)
scheduled_stop_test.go:54: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p scheduled-stop-583300 -- sudo systemctl show minikube-scheduled-stop --no-page
scheduled_stop_test.go:54: (dbg) Done: out/minikube-windows-amd64.exe ssh -p scheduled-stop-583300 -- sudo systemctl show minikube-scheduled-stop --no-page: (10.1699146s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-windows-amd64.exe stop -p scheduled-stop-583300 --schedule 5s
scheduled_stop_test.go:137: (dbg) Done: out/minikube-windows-amd64.exe stop -p scheduled-stop-583300 --schedule 5s: (11.3186914s)
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-windows-amd64.exe status -p scheduled-stop-583300
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status -p scheduled-stop-583300: exit status 7 (2.5445372s)

-- stdout --
	scheduled-stop-583300
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
** stderr ** 
	W0328 01:51:16.663458   12600 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p scheduled-stop-583300 -n scheduled-stop-583300
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p scheduled-stop-583300 -n scheduled-stop-583300: exit status 7 (2.532421s)

-- stdout --
	Stopped

-- /stdout --
** stderr ** 
	W0328 01:51:19.214898    4596 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-583300" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p scheduled-stop-583300
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p scheduled-stop-583300: (28.8054907s)
--- PASS: TestScheduledStopWindows (347.80s)

TestRunningBinaryUpgrade (1154.13s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  C:\Users\jenkins.minikube6\AppData\Local\Temp\minikube-v1.26.0.935927273.exe start -p running-upgrade-905300 --memory=2200 --vm-driver=hyperv
version_upgrade_test.go:120: (dbg) Done: C:\Users\jenkins.minikube6\AppData\Local\Temp\minikube-v1.26.0.935927273.exe start -p running-upgrade-905300 --memory=2200 --vm-driver=hyperv: (8m26.2543237s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-windows-amd64.exe start -p running-upgrade-905300 --memory=2200 --alsologtostderr -v=1 --driver=hyperv
version_upgrade_test.go:130: (dbg) Done: out/minikube-windows-amd64.exe start -p running-upgrade-905300 --memory=2200 --alsologtostderr -v=1 --driver=hyperv: (9m29.4105814s)
helpers_test.go:175: Cleaning up "running-upgrade-905300" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p running-upgrade-905300
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p running-upgrade-905300: (1m17.7347863s)
--- PASS: TestRunningBinaryUpgrade (1154.13s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.44s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-905300 --no-kubernetes --kubernetes-version=1.20 --driver=hyperv
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p NoKubernetes-905300 --no-kubernetes --kubernetes-version=1.20 --driver=hyperv: exit status 14 (437.3638ms)

-- stdout --
	* [NoKubernetes-905300] minikube v1.33.0-beta.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4170 Build 19045.4170
	  - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=18485
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	
	
-- /stdout --
** stderr ** 
	W0328 01:51:50.571523   12668 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.44s)

TestStoppedBinaryUpgrade/Setup (0.66s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.66s)

TestStoppedBinaryUpgrade/Upgrade (907.09s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  C:\Users\jenkins.minikube6\AppData\Local\Temp\minikube-v1.26.0.2374548936.exe start -p stopped-upgrade-690800 --memory=2200 --vm-driver=hyperv
version_upgrade_test.go:183: (dbg) Done: C:\Users\jenkins.minikube6\AppData\Local\Temp\minikube-v1.26.0.2374548936.exe start -p stopped-upgrade-690800 --memory=2200 --vm-driver=hyperv: (7m50.6180605s)
version_upgrade_test.go:192: (dbg) Run:  C:\Users\jenkins.minikube6\AppData\Local\Temp\minikube-v1.26.0.2374548936.exe -p stopped-upgrade-690800 stop
version_upgrade_test.go:192: (dbg) Done: C:\Users\jenkins.minikube6\AppData\Local\Temp\minikube-v1.26.0.2374548936.exe -p stopped-upgrade-690800 stop: (39.6446011s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-windows-amd64.exe start -p stopped-upgrade-690800 --memory=2200 --alsologtostderr -v=1 --driver=hyperv
version_upgrade_test.go:198: (dbg) Done: out/minikube-windows-amd64.exe start -p stopped-upgrade-690800 --memory=2200 --alsologtostderr -v=1 --driver=hyperv: (6m36.8239738s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (907.09s)

TestStoppedBinaryUpgrade/MinikubeLogs (9.94s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-windows-amd64.exe logs -p stopped-upgrade-690800
version_upgrade_test.go:206: (dbg) Done: out/minikube-windows-amd64.exe logs -p stopped-upgrade-690800: (9.9381241s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (9.94s)

Test skip (31/193)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.29.3/cached-images (0s)

=== RUN   TestDownloadOnly/v1.29.3/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.29.3/cached-images (0.00s)

TestDownloadOnly/v1.29.3/binaries (0s)

=== RUN   TestDownloadOnly/v1.29.3/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.29.3/binaries (0.00s)

TestDownloadOnly/v1.30.0-beta.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.30.0-beta.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.30.0-beta.0/cached-images (0.00s)

TestDownloadOnly/v1.30.0-beta.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.30.0-beta.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.30.0-beta.0/binaries (0.00s)

TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false windows amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/DashboardCmd (300.02s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-windows-amd64.exe dashboard --url --port 36195 -p functional-848700 --alsologtostderr -v=1]
functional_test.go:912: output didn't produce a URL
functional_test.go:906: (dbg) stopping [out/minikube-windows-amd64.exe dashboard --url --port 36195 -p functional-848700 --alsologtostderr -v=1] ...
helpers_test.go:502: unable to terminate pid 3284: Access is denied.
--- SKIP: TestFunctional/parallel/DashboardCmd (300.02s)

TestFunctional/parallel/DryRun (5.05s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-848700 --dry-run --memory 250MB --alsologtostderr --driver=hyperv
functional_test.go:970: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p functional-848700 --dry-run --memory 250MB --alsologtostderr --driver=hyperv: exit status 1 (5.0526475s)

-- stdout --
	* [functional-848700] minikube v1.33.0-beta.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4170 Build 19045.4170
	  - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=18485
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true

-- /stdout --
** stderr ** 
	W0327 23:55:17.246955    9276 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0327 23:55:17.325965    9276 out.go:291] Setting OutFile to fd 808 ...
	I0327 23:55:17.325965    9276 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 23:55:17.325965    9276 out.go:304] Setting ErrFile to fd 976...
	I0327 23:55:17.325965    9276 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 23:55:17.349971    9276 out.go:298] Setting JSON to false
	I0327 23:55:17.354974    9276 start.go:129] hostinfo: {"hostname":"minikube6","uptime":6378,"bootTime":1711577338,"procs":192,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4170 Build 19045.4170","kernelVersion":"10.0.19045.4170 Build 19045.4170","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0327 23:55:17.354974    9276 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0327 23:55:17.357974    9276 out.go:177] * [functional-848700] minikube v1.33.0-beta.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4170 Build 19045.4170
	I0327 23:55:17.361978    9276 notify.go:220] Checking for updates...
	I0327 23:55:17.363969    9276 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0327 23:55:17.366962    9276 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0327 23:55:17.370978    9276 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0327 23:55:17.373013    9276 out.go:177]   - MINIKUBE_LOCATION=18485
	I0327 23:55:17.375968    9276 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0327 23:55:17.379960    9276 config.go:182] Loaded profile config "functional-848700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0327 23:55:17.380960    9276 driver.go:392] Setting default libvirt URI to qemu:///system

** /stderr **
functional_test.go:976: skipping this error on HyperV till this issue is solved https://github.com/kubernetes/minikube/issues/9785
--- SKIP: TestFunctional/parallel/DryRun (5.05s)

TestFunctional/parallel/InternationalLanguage (5.02s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-848700 --dry-run --memory 250MB --alsologtostderr --driver=hyperv
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p functional-848700 --dry-run --memory 250MB --alsologtostderr --driver=hyperv: exit status 1 (5.0169824s)

-- stdout --
	* [functional-848700] minikube v1.33.0-beta.0 sur Microsoft Windows 10 Enterprise N 10.0.19045.4170 Build 19045.4170
	  - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=18485
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true

-- /stdout --
** stderr ** 
	W0327 23:55:18.851179   10892 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0327 23:55:18.928785   10892 out.go:291] Setting OutFile to fd 852 ...
	I0327 23:55:18.928785   10892 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 23:55:18.928785   10892 out.go:304] Setting ErrFile to fd 856...
	I0327 23:55:18.928785   10892 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 23:55:18.952791   10892 out.go:298] Setting JSON to false
	I0327 23:55:18.957809   10892 start.go:129] hostinfo: {"hostname":"minikube6","uptime":6380,"bootTime":1711577338,"procs":192,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4170 Build 19045.4170","kernelVersion":"10.0.19045.4170 Build 19045.4170","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0327 23:55:18.958823   10892 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0327 23:55:18.961782   10892 out.go:177] * [functional-848700] minikube v1.33.0-beta.0 sur Microsoft Windows 10 Enterprise N 10.0.19045.4170 Build 19045.4170
	I0327 23:55:18.964770   10892 notify.go:220] Checking for updates...
	I0327 23:55:18.966770   10892 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0327 23:55:18.970783   10892 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0327 23:55:18.972770   10892 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0327 23:55:18.975785   10892 out.go:177]   - MINIKUBE_LOCATION=18485
	I0327 23:55:18.977777   10892 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0327 23:55:18.981801   10892 config.go:182] Loaded profile config "functional-848700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0327 23:55:18.982784   10892 driver.go:392] Setting default libvirt URI to qemu:///system

** /stderr **
functional_test.go:1021: skipping this error on HyperV till this issue is solved https://github.com/kubernetes/minikube/issues/9785
--- SKIP: TestFunctional/parallel/InternationalLanguage (5.02s)

TestFunctional/parallel/MountCmd (0s)

=== RUN   TestFunctional/parallel/MountCmd
=== PAUSE TestFunctional/parallel/MountCmd

=== CONT  TestFunctional/parallel/MountCmd
functional_test_mount_test.go:57: skipping: mount broken on hyperv: https://github.com/kubernetes/minikube/issues/5029
--- SKIP: TestFunctional/parallel/MountCmd (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:230: The test WaitService/IngressIP is broken on hyperv https://github.com/kubernetes/minikube/issues/8381
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:258: skipping: access direct test is broken on windows: https://github.com/kubernetes/minikube/issues/8304
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestScheduledStopUnix (0s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:76: test only runs on unix
--- SKIP: TestScheduledStopUnix (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:39: skipping due to https://github.com/kubernetes/minikube/issues/14232
--- SKIP: TestSkaffold (0.00s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)
